Google’s parent company Alphabet has reportedly warned its employees not to enter confidential information into the tech giant’s AI chatbot Bard, JEE News reported on Thursday, citing sources.
The instructions were issued as part of the company’s long-standing policy on safeguarding information.
Chatbots – including OpenAI’s ChatGPT and Google’s Bard – are programs that use generative artificial intelligence to hold human-like conversations with users and respond to a wide range of prompts.
According to researchers, human reviewers may read users’ chats, and a similar AI could reproduce the data it absorbed during training, creating a risk of information leakage.

JEE News reported that Alphabet also warned its engineers to avoid directly using computer code that chatbots can generate.
Google’s parent said that while Bard may make undesired code suggestions, it still helps programmers, adding that it aims to be transparent about the limitations of its technology.
The concerns illustrate how the technology giant is trying to avoid harm to its business from the software it launched to compete with ChatGPT.

As of February, Google told staff testing Bard before its launch not to give it inside information, Insider reported.
Now Google offers Bard as a springboard for creativity in more than 180 countries and 40 languages, and its warnings extend to its code suggestions.
Google told JEE News it had held detailed discussions with Ireland’s Data Protection Commission and was addressing regulators’ questions, following a Politico report on Tuesday that the company was delaying Bard’s EU launch this week pending further information on the chatbot’s impact on privacy.
Leakage of information
A Google privacy notice updated on June 1 also says: “Don’t include confidential or sensitive information in your Bard conversations.”
Some companies have developed software to address such concerns. For example, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing the ability for businesses to tag certain data and prevent it from flowing externally.
Google and Microsoft are also offering conversational tools to business users that will come at a higher price but avoid absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ chat history, which users can choose to delete.
Yusuf Mehdi, Microsoft’s chief consumer marketing officer, said: “It’s understandable that companies wouldn’t want their staff to use public chatbots for work.”
“Companies are taking a fairly conservative approach,” he said, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “Over there, our policies are much more stringent.”