AI needs privacy and decentralization more than you think | Opinion

As a researcher with long experience in the tech industry, I am deeply concerned about the concentration of AI development in the hands of a few powerful corporations and its impact on individual and societal privacy. Through tools like screen capture, keystroke logging, and always-on computer vision, these companies have unprecedented access to the intimate details of our lives and our confidential information, and with it significant power over our digital identities.

The reality is unwelcome to many: your personal data is already handled by a multitude of businesses, potentially hundreds or even thousands of them. You can easily verify this with publicly available tools; for most people, the count hovers around several hundred. Artificial intelligence only intensifies the issue.

Businesses worldwide are integrating OpenAI technology into their software, which means their users' inputs are routed through OpenAI's centralized servers. At the same time, a notable number of OpenAI's safety team members have left the company.

When you install an app such as Facebook, roughly 80% of your data may be collected. That can include your lifestyle and interests, your online activities, demographic details, and even sensitive data such as biometric information.

Why do companies collect all this info?

The short answer is profit. A business with comprehensive customer data can earn significantly more from it. An eCommerce company without detailed knowledge of its clientele, for instance, is limited to generic marketing campaigns that rarely yield optimal results.

With detailed data on customers' demographics, preferences, prior purchases, and digital activity, the same company can use AI to serve highly customized advertisements and product recommendations, generating substantially higher sales.

As AI becomes increasingly woven into advertising, social media, finance, and healthcare, the risk that confidential information is exposed or misused grows with it. The importance of building secure, privacy-preserving AI systems cannot be overstated.

The data dilemma

Consider the vast troves of personal information we share with tech giants like Google and OpenAI every day. Every search query, every email, every interaction with an AI assistant is meticulously logged and analyzed. These companies monetize that data by funneling it into intricate algorithms that target ads, suggest content, and keep us glued to their platforms.

Now take this a step further and consider what happens as AI becomes deeply ingrained in our lives. We engage with it intimately, revealing our innermost thoughts, anxieties, and aspirations. A system that knows you this well can mimic your behavior with remarkable precision, and that power could be exploited: to persuade you to purchase goods, to influence your voting decisions, or to nudge you toward actions that go against your best interests.

This is the core risk of centralized AI. When a few corporations control both the data and the algorithms, they wield enormous influence over our daily experience. They can mold our reality subtly, sometimes without our awareness. We must be mindful of this concentration of power and advocate for decentralized solutions that distribute control over the AI ecosystem more equitably.

A better future for data and AI

Addressing these privacy concerns means fundamentally rethinking the infrastructure on which our data is stored and processed for AI. By constructing systems that embed security and privacy from their inception, we can pave the way for a future where data and artificial intelligence coexist with respect for individual rights and the protection of sensitive information.

By combining hardware isolation, encryption in transit and at rest, secure boot procedures, and trusted execution environments (TEEs), businesses can preserve the confidentiality and integrity of user data while it undergoes AI processing. These technologies help ensure that privacy is not jeopardized at any stage of the data-handling pipeline.
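The core idea behind "encrypted in transit and at rest" is that plaintext only ever exists inside the trusted boundary. The toy sketch below illustrates the shape of that flow in Python; it is a deliberately simplified stream cipher built from SHA-256 for illustration only, not production cryptography (real systems use vetted libraries and hardware-backed keys inside the TEE):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key + nonce + counter.
    # Toy construction for illustration; not a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce per message; ciphertext = plaintext XOR keystream.
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    # Decryption happens only where the key lives (e.g., inside a TEE).
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

The point is architectural: the operator of the servers in the middle only ever sees the encrypted blob, while the key stays with the user or inside the enclave.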

With this approach, you retain full authority over your data: you decide what to share and with whom. Realizing fully private and secure AI is an intricate problem that demands creative solutions. Decentralized systems offer hope, but only a small number of projects are actively tackling this concern. LibertAI, a project I'm involved with, and initiatives like Morpheus are exploring sophisticated cryptographic methods and decentralized architectures that keep data encrypted and under the user's control throughout the entire AI workflow. These advancements mark significant progress toward unlocking the potential of confidential AI.

The possibilities for confidential AI are immense. In healthcare, it could enable large-scale analysis of sensitive medical information without infringing on patient privacy, letting researchers glean valuable insights from vast datasets while individual records remain strictly confidential.
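One concrete technique for this kind of privacy-preserving analysis is differential privacy: clip each record's influence, then add calibrated noise to aggregate statistics so no single patient's data is recoverable. A minimal sketch of the Laplace mechanism (function names are illustrative, not from any specific project):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample from a Laplace(0, scale) distribution via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    # Clip each record to [lower, upper] so any one patient's influence
    # on the result is bounded, then add noise calibrated to that bound.
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

Researchers see a statistically useful mean while the noise masks whether any individual record was present in the dataset.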

In the financial sector, confidential AI can detect fraud and money laundering without disclosing individuals' financial details, allowing banks to pool data and develop shared models while mitigating the risk of leaks or breaches. The same approach opens opportunities well beyond finance, from personalized education to privacy-respecting advertising. And in the decentralized web3 environment, autonomous agents can hold private keys and execute transactions directly on the blockchain while maintaining confidentiality and independence.
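One hypothetical pattern for the shared bank models described above is federated learning: each institution trains on its own transactions locally and only model weights, never raw customer data, are sent to a coordinator for aggregation. A minimal sketch (names illustrative, not any project's actual implementation):

```python
def federated_average(local_weights):
    # Each bank submits only its locally trained model weights; raw
    # transaction records never leave the institution. The coordinator
    # averages the weight vectors element-wise into one shared model.
    n = len(local_weights)
    return [sum(column) / n for column in zip(*local_weights)]

# Example: two banks' locally trained fraud-model weights.
bank_a = [0.20, 1.50, -0.30]
bank_b = [0.40, 1.10, -0.10]
shared_model = federated_average([bank_a, bank_b])
```

In a real deployment the updates themselves would also be protected (e.g., via secure aggregation or a TEE), since raw gradients can leak information too.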

Challenges

Of course, unlocking the full potential of confidential AI is no simple feat. It requires overcoming technical hurdles such as keeping encrypted data secure and preventing leaks throughout the computation process.

Companies also face regulatory challenges around data privacy and AI. The legal frameworks governing these technologies are still evolving and require careful navigation to ensure compliance. In Europe, the General Data Protection Regulation (GDPR) sets stringent rules; in the US, the Health Insurance Portability and Accountability Act (HIPAA) imposes its own requirements for health data. These are only two examples of the intricate legal terrain businesses must traverse.

Yet, the most significant hurdle might be trust. For confidential AI applications to flourish, individuals must have faith in the security of their data. To achieve this, it’s essential not only to provide advanced technological safeguards but also to maintain transparency and open communication from the companies involved.

The road ahead

In spite of the obstacles, the outlook for confidential AI remains optimistic. With an increasing awareness of data security in various sectors, the need for reliable and secure AI solutions is poised to expand.

Businesses that can guarantee the confidentiality of their AI applications will gain a significant competitive edge. They will have access to extensive datasets that were once off-limits due to privacy concerns, and they will earn the trust and confidence of their customers.

This isn't solely about commercial prospects, though. It's about building an AI ecosystem that puts individuals first, one where privacy is inherently protected and valued rather than bolted on as an afterthought.

As artificial intelligence advances rapidly, privacy and security become essential. Confidential AI can deliver the benefits of processing sensitive information while ensuring that data remains protected. That is a proposition too valuable to overlook.

Jonathan Schemoul

Jonathan Schemoul is the CEO of Twentysix Cloud and aleph.im, a founding member of LibertAI, and a seasoned developer with expertise in blockchain technology and artificial intelligence. He specializes in decentralized cloud computing, the Internet of Things (IoT), financial systems, and scalable technologies for web3, gaming, and AI. He also advises prominent French financial institutions and businesses such as Ubisoft, supporting local innovation.


2024-06-23 15:05