Google warns its employees: do not use the code generated by Bard

AI in brief Google has warned its employees not to disclose confidential information to, or use code generated by, its AI chatbot, Bard.

The policy isn’t surprising, given that the chocolate factory has also advised users, in an updated privacy notice, not to include sensitive information in their conversations with Bard. Similarly, other large companies have warned their staff against leaking documents or proprietary code, and have banned them from using rival AI chatbots.

Google’s internal warning, however, raises concerns that AI tools created by private companies cannot be trusted, especially if the creators themselves do not use them due to privacy and security risks.

Warning its employees not to use Bard-generated code directly undermines Google’s claims that its chatbot can help developers become more productive. The ruler of search and ads told Reuters its internal ban was introduced because Bard can make “undesired code suggestions”. These could lead to buggy programs, or to complex, bloated software that costs developers more time to fix than if they hadn’t used AI to write code at all.

Microsoft-owned voice AI maker Nuance is being sued

Nuance, a developer of speech recognition software acquired by Microsoft, was accused in an amended lawsuit filed last week of recording and using people’s voices without permission.

Three people have sued the company, accusing it of violating the California Invasion of Privacy Act, which says companies cannot intercept consumers’ communications or record people without their explicit written consent. The plaintiffs say Nuance records people’s voices during phone calls to call centers, which use its technology to verify callers’ identities.

“Nuance performs its voice examination entirely in the background of every engagement or phone call,” the plaintiffs said. “In other words, Nuance listens silently to the consumer’s voice in the background of a call, in such a way that consumers likely won’t realize they are unknowingly interacting with a third-party company. This surreptitious voice capture, recording, examination, and analytics process is a major component of Nuance’s overall biometric security suite.”

They argue that recording people’s voices exposes them to the risk of being identified when discussing sensitive personal information, and means their voices could be cloned to bypass Nuance’s own security features.

“If left unchecked, California citizens run the risk of having their voices unknowingly analyzed and mined for data by third parties to make various decisions about their lifestyle, health, credibility, and trustworthiness, and, most importantly, to determine whether they are indeed who they claim to be,” the court filings state.

The Register has asked Nuance for comment.

Google doesn’t support the idea of a new federal AI regulatory agency

Google’s DeepMind AI lab doesn’t want the US government to set up an agency focused solely on AI regulation.

Instead, according to a 33-page report [PDF] obtained by the Washington Post, it believes the work should be split among existing departments. The document was submitted in response to a request for public comment issued by the National Telecommunications and Information Administration in April.

Google’s AI subsidiary has called for “a multi-layered, multi-stakeholder approach to AI governance” and advocated a “hub-and-spoke approach” whereby a central body such as NIST could oversee and lead the policies and issues addressed by numerous agencies with different areas of expertise.

“AI will present unique issues in financial services, healthcare, and other regulated industries and problem areas that will benefit from the expertise of regulators with experience in those industries, which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed,” the document says.

Google DeepMind’s view differs from that of other companies, including OpenAI and Microsoft, as well as policy experts and lawmakers, who support the idea of building an AI-focused agency to handle regulation.

Microsoft rushed to release the new Bing despite OpenAI’s warnings

OpenAI reportedly warned Microsoft against releasing its GPT-4-based Bing chatbot too quickly, cautioning that it could generate false information and inappropriate language.

Bing shocked users with its creepy tone and sometimes manipulative or threatening behavior when it launched. Microsoft later restricted conversation lengths to stop the chatbot going off the rails. OpenAI had urged the tech titan to hold off releasing the product and work on its issues first.

But Microsoft didn’t seem to listen and went ahead anyway, according to the Wall Street Journal. This wasn’t the only conflict between the two AI allies, however. Months before Bing’s launch, OpenAI released ChatGPT despite Microsoft’s concerns that it could steal the limelight from its AI-powered web search engine.

Microsoft owns a 49 percent stake in OpenAI and gets to access and distribute the startup’s technology ahead of rivals. Unlike with GPT-3, however, Microsoft does not have exclusive rights to license GPT-4. At times this can make things awkward: OpenAI often courts the same customers as Microsoft, or other companies that compete directly with its investor.

Over time, this could strain their relationship. “What puts them on a collision course the most is that both sides have to make money,” said Oren Etzioni, former CEO of the Allen Institute for Artificial Intelligence. “The conflict is that both will try to make money with similar products.”
