As a seasoned crypto investor with a keen interest in technology and its ethical implications, I’m glad to see Microsoft taking a proactive approach to ensure the safety and responsible deployment of their AI products. The recent increase in staffing on their artificial intelligence product safety team is a positive sign that the company is prioritizing transparency and accountability.
As an analyst, I would put it this way: Last year, I observed that Microsoft Corp. expanded its workforce on the artificial intelligence product safety team by hiring additional 50 members, bringing the total headcount to 400.
The company revealed in its inaugural AI transparency report, published on Wednesday, that approximately half of its workforce is fully committed to this initiative. They detailed measures ensuring the ethical deployment of their offerings, while new recruits have recently joined the team as well.
Last year, Microsoft dissolved its Ethics and Society team as part of broader tech industry cost-cutting measures. Consequentially, safety and trust departments were eliminated at various companies, including Google and Meta Platforms.
Microsoft aims to boost trust in its AI tools following concerns over their capability to generate unusual and even harmful content from incidents observed with the Copilot chatbot in February.
As an analyst, I came across some concerning information regarding Microsoft’s Copilot Designer, an AI picture production tool. Last month, I brought this matter to the attention of the Federal Trade Commission, the board, and legislators by sending them emails, expressing my concerns over the company’s insufficient efforts to prevent the creation of violent and abusive images using this tool.
Microsoft relies on the foundational methodology created by the National Institute for Standards and Technology for ensuring safe deployment of artificial intelligence systems.
Last year, under Executive Order issued by President Joe Biden, I found myself tasked with the responsibility of establishing guidelines within my agency, a subdivision of the Department of Commerce, regarding this emerging technology.
Microsoft announced in its initial report that they have introduced thirty responsible AI features, some of which increase the challenge for users looking to make AI chatbots behave unusually.
As a crypto investor, I’m always on the lookout for ways to safeguard my investments from potential threats. One such threat is prompt injection attacks or jailbreaks, which can manipulate an AI model into behaving unpredictably. To protect against these malicious attempts, companies have developed what are called “prompt shields.” In simpler terms, these shields act as a barrier, identifying and preventing intentional efforts to interfere with the AI’s normal functioning.
Read More
Sorry. No data so far.
2024-05-01 21:56