Think AI tools aren’t collecting your data? Guess again

The meteoric rise of generative AI has created a genuine tech sensation through user-centric products like OpenAI's ChatGPT, DALL-E and Lensa. But the boom in user-friendly AI has arrived in tandem with users who are seemingly unaware of, or left in the dark about, the privacy risks these projects pose.

Amidst all this hype, however, international governments and major tech figures are starting to sound the alarm. Citing privacy and security concerns, Italy just temporarily banned ChatGPT, potentially inspiring a similar blockade in Germany. In the private sector, hundreds of AI researchers and technology leaders, including Elon Musk and Steve Wozniak, have signed an open letter urging a six-month moratorium on AI development beyond the scope of GPT-4.

The relatively quick action to curb irresponsible AI development is commendable, but the broader landscape of threats that AI poses to privacy and data security goes beyond any one model or developer. While no one wants to rain on the parade of AI's paradigm-shifting capabilities, its shortcomings need to be addressed head-on now, lest the consequences become catastrophic.

AI data privacy storm

While it would be easy to argue that OpenAI and other Big Tech-powered AI projects are solely responsible for AI's data privacy problem, the topic had been broached long before these tools entered the mainstream. Data privacy scandals in AI occurred well before the crackdown on ChatGPT; they have simply played out largely outside of the public eye.

Just last year, Clearview AI, an AI-powered facial recognition company used by thousands of government and law enforcement agencies with limited public knowledge, was banned from selling facial recognition technology to private companies in the United States. Clearview was also fined $9.4 million in the United Kingdom over its illegal facial recognition database. Who's to say that consumer-centric visual AI projects like Midjourney or others can't be used for similar purposes?

The problem is that they already have been. A series of recent deepfake scandals involving pornography and fake news created with consumer-level AI products has only heightened the urgency of protecting users from nefarious uses of AI. These incidents take digital mimicry, once a hypothetical concept, and make it a very real threat to ordinary people and influential public figures alike.


Generative AI models rely heavily on new and existing data to build and strengthen their capabilities and usability; that is part of why ChatGPT is so impressive. That said, a model that depends on new data inputs needs somewhere to source that data, and some of it will inevitably include the personal data of the people using it. That trove of data can easily be misused if centralized entities, governments or hackers get hold of it.

So, with a limited scope for comprehensive regulation and mixed opinions on AI development, what can companies and users working with these products do now?

What businesses and users can do

The fact that governments and other developers are raising flags on AI now actually marks an improvement over the glacial pace at which Web2 applications and cryptocurrency were regulated. But raising flags is not the same as oversight, so maintaining a sense of urgency without being alarmist is essential to creating effective regulations before it's too late.

Italy's ban on ChatGPT isn't the first action a government has taken against AI. The European Union and Brazil are both passing acts sanctioning certain types of AI use and development. Similarly, generative AI's potential to facilitate data breaches has sparked early legislative action from the Canadian government.

The problem of AI data breaches is serious enough that OpenAI had to step in itself. If you opened ChatGPT a few weeks ago, you may have noticed that the chat history feature was disabled. OpenAI temporarily shut the feature down because of a significant privacy issue in which strangers' prompts were exposed and payment information was revealed.


While OpenAI effectively put out this fire, it can be hard to trust that programs led by Web2 giants that have cut their AI ethics teams will preemptively do the right thing.

Industry-wide, an AI development strategy that leans more on federated machine learning would also improve data privacy. Federated learning is a collaborative training technique in which no single party has access to the full dataset: multiple independent sources each train a shared model on their own local data and contribute only the resulting model updates, never the raw data itself.
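The core idea behind federated learning can be sketched in a few lines. Below is a hypothetical toy example of federated averaging (FedAvg) using NumPy and a simple linear model; the client datasets, function names and learning rate are illustrative assumptions, and production systems (e.g., TensorFlow Federated or Flower) add secure aggregation, communication layers and differential privacy on top of this pattern:

```python
# Toy sketch of federated averaging (FedAvg): each client trains on its own
# private data and shares only model parameters, never the data itself.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a client's private data (kept on-device)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, clients):
    """Server averages the clients' updated parameters; raw datasets never move."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Three clients, each holding a private dataset drawn from y = 3x + noise.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    clients.append((X, 3 * X[:, 0] + rng.normal(scale=0.1, size=50)))

w = np.zeros(1)
for _ in range(100):
    w = federated_round(w, clients)

print(w)  # the aggregated model recovers a weight close to 3
```

The privacy gain is structural: the server only ever sees parameter vectors, so a breach of the aggregator exposes model weights rather than the users' underlying records.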

On the user front, becoming an AI Luddite and giving up these programs altogether is unnecessary, and will likely soon be impossible anyway. But there are ways to be smarter about which generative AI tools you grant access to your daily life. For enterprises and small businesses incorporating AI products into their operations, being vigilant about what data feeds the algorithm is even more vital.

The evergreen saying that when a product is free, your personal data is the product still applies to artificial intelligence. Keeping that in mind might make you reconsider which AI projects you spend time on and what you actually use them for. If you've jumped on every social media trend that involves submitting photos of yourself to a shady AI-powered website, consider skipping the next one.

ChatGPT reached 100 million users just two months after its launch, a staggering figure that makes clear our digital future will involve artificial intelligence. But despite those numbers, AI is not yet ubiquitous. Regulators and businesses should use this window to proactively build frameworks for responsible and secure AI development instead of chasing projects once they become too big to control. At present, generative AI development is not balanced between protection and progress, but there is still time to find the right path to ensure that user information and privacy remain at the forefront.

Ryan Paterson is the president of Unplugged. Prior to taking the reins at Unplugged, he was founder, president and CEO of IST Research from 2008 to 2020, exiting through a sale of the company in September 2020. He completed two tours with the Defense Advanced Research Projects Agency and served 12 years in the United States Marine Corps.

Erik Prince is an entrepreneur, philanthropist and Navy SEAL veteran with business interests in Europe, Africa, the Middle East and North America. He served as founder and president of Frontier Resource Group and as founder of Blackwater USA, a provider of global security, training and logistics solutions to the US government and other entities before selling the company in 2010.

This article is for general informational purposes only and is not intended to be and should not be relied upon as investment or legal advice. The views, thoughts and opinions expressed herein are those of the authors only and do not necessarily reflect or represent the views and opinions of Cointelegraph.

