Google, one of AI’s biggest advocates, warns its own employees about chatbots


Alphabet Inc is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world, four people familiar with the matter told Reuters.

Google’s parent company has advised employees not to enter confidential materials into AI chatbots, the people said and the company confirmed, citing its longstanding policy on safeguarding information.

Chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer countless prompts. Human reviewers may read the chats, and researchers have found that similar AI can reproduce the data it absorbed during training, creating a leak risk.

Alphabet has also warned its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.

Asked for comment, the company said Bard can make undesired code suggestions, but that it helps programmers nonetheless. Google also said it aims to be transparent about the limitations of its technology.

The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT. At stake in Google’s race against OpenAI and ChatGPT backer Microsoft Corp are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programs.

Google’s caution also reflects what is becoming a security standard for corporations: warning personnel about using publicly available chat programs.

A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not return a request for comment, reportedly has as well.

Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top U.S.-based companies, conducted by the networking site Fishbowl.

By February, Google had asked staff testing Bard before its launch not to give it internal information, Insider reported. Google is now rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.

Google told Reuters it has had detailed conversations with Ireland’s Data Protection Commission and is addressing the regulator’s questions, after a Politico report on Tuesday that the company had postponed Bard’s EU launch this week pending more information about the chatbot’s impact on privacy.

Worries about sensitive information

Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data, or even copyrighted passages from a “Harry Potter” novel.

Google’s privacy notice updated on June 1 also states: “Do not include confidential or sensitive information in your Bard chats.”

Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.

Google and Microsoft also offer conversational tools to business customers that come at a higher price but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT saves users’ conversation history, which users can opt to delete.

Yusuf Mehdi, Microsoft’s head of consumer marketing, said it “makes sense” that companies would not want their staff to use public chatbots for work.

“Companies are taking a duly cautious standpoint,” Mehdi said, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “There, our policies are much more strict.”

Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.

Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records.”
