As an analyst with extensive experience in data privacy and artificial intelligence, I strongly believe that Noyb’s complaint against OpenAI is a significant step towards ensuring transparency and accountability in AI-driven systems. The increasing use of large language models (LLMs) by companies raises legitimate concerns about the accuracy and security of personal data being processed.
Noyb, a European digital rights advocacy group headquartered in Vienna, Austria, has filed a complaint with the Austrian data protection authority requesting an investigation into OpenAI’s handling of personal data in its LLMs, citing concerns over the models’ accuracy.
According to De Graaf, a data protection lawyer at Noyb, companies struggle to comply with EU rules on the handling of personal data when developing chatbots such as ChatGPT.
Noyb’s complaint is not an isolated event. A study published by two European non-profit organizations in December 2023 raised concerns about Microsoft’s Bing AI chatbot (since rebranded as Copilot), which disseminated misleading or inaccurate information around elections in Germany and Switzerland.
The chatbot gave incorrect answers about candidates, poll results, and scandals, and misattributed the sources of its information.
Similarly, Google’s Gemini AI chatbot, in an incident outside the EU, drew backlash for producing “progressive” but inaccurate visuals. In response, Google issued an apology and promised to revise the model.
As a crypto investor, I’ve watched with growing alarm as incidents involving AI-driven systems call their reliability and regulatory compliance into question. These episodes raise serious concerns about data privacy and accuracy, not just within the European Union but globally, and they warrant close scrutiny.