European regulators are stepping up scrutiny of Microsoft over potential risks linked to the artificial intelligence (AI) features of its Bing search engine. The EU Commission has given Microsoft until May 27th to submit details about two of those features, Copilot and Image Creator. The concerns, which regulators say are not to be taken lightly, center on generative AI risks such as deepfakes, automated manipulation of services, and misinformation.
If Microsoft misses this deadline, set under the EU's Digital Services Act, it risks a penalty of up to 1% of its total worldwide annual turnover. Given that Microsoft reported revenue of more than $211 billion in 2023, such a fine could exceed $2 billion.
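The arithmetic behind that figure is straightforward; the following minimal sketch (using the article's reported numbers, not official Commission calculations) shows how the 1% cap translates into a dollar amount:

```python
# Illustrative arithmetic only: the Digital Services Act caps this type of
# penalty at 1% of total worldwide annual income.
annual_revenue = 211_000_000_000  # Microsoft's reported 2023 revenue, ~$211B
max_fine_rate = 0.01              # DSA cap: 1% of worldwide annual turnover

max_fine = annual_revenue * max_fine_rate
print(f"Maximum potential fine: ${max_fine / 1e9:.2f} billion")
```

At roughly $2.11 billion, the cap comfortably exceeds the $2 billion figure cited above.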
Under the Digital Services Act, we require Microsoft to disclose details about potential risks related to generative AI on Bing. These risks include phenomena like “hallucinations” and deepfakes, in addition to automated manipulations capable of deceiving voters.
— European Commission (@EU_Commission) May 17, 2024
The Commission warns that Bing's generative AI systems could be used to spread misinformation or deepfakes, or to manipulate voters through misleading information. A particular concern is AI "hallucinations," confidently generated false content, which could prove harmful.
A representative from the European Union stated, “Bing’s generative AI could potentially present risks, including ‘hallucinations,’ deepfakes, and the covert manipulation of services capable of deceiving voters.”
Microsoft has not yet made any public statement on the Commission's request. Based on past practice, however, the company is likely to argue that its AI systems include robust safety features and content-moderation tools capable of identifying and removing harmful outputs.
2024-05-19 03:33