Alphabet Inc. Raises Concerns over Internal Use of AI Chatbots


Key Points
Alphabet Inc., the parent company of Google and a global advocate for artificial intelligence (AI), has reportedly warned its employees to exercise caution when using AI chatbots, including its own “Bard” platform.



While Alphabet actively promotes AI technology worldwide, the company has confirmed reports that it privately cautions employees against entering sensitive information into AI chatbots, citing its information security policies. The concerns stem from the possibility that human reviewers may read chat conversations and that the AI itself may reproduce data absorbed during training, creating data-leakage risks.


Similar to Alphabet, several other prominent companies, including Samsung Electronics and Deutsche Bank, have decided to fortify their defenses against AI chatbots to prevent the leakage of confidential business information.


Notably, a Fishbowl survey of nearly 12,000 respondents found that, as of January 2023, approximately 43% of professionals use ChatGPT or other AI tools, often without their superiors’ knowledge. AI chatbots can assist in drafting emails, documents, and even software, significantly speeding up employees’ work. However, the content these chatbots generate may include erroneous information, sensitive data, or even plagiarized excerpts from books such as “Harry Potter.”


Apple Inc. CEO Tim Cook recently disclosed that the company has started using the AI chatbot ChatGPT while carefully evaluating the technology. Cook acknowledged that large language models (LLMs) like ChatGPT and Bard hold tremendous potential but may also introduce biases, misinformation, and potentially worse problems.


The Wall Street Journal reported on May 18th that Apple, concerned about leaks of internal confidential information, has banned some employees from using ChatGPT and other external AI tools. At the same time, the company is accelerating the development of its own similar technology.


According to the report, Apple not only restricts the use of AI software like ChatGPT but also prohibits employees from using Microsoft Corp.’s GitHub Copilot, an automated software-coding tool. Insiders claim that Apple is actively developing its own large language model.

Alphabet Inc.’s cautionary stance on the internal use of AI chatbots highlights growing concerns about information security risks. As companies like Alphabet, Samsung, and Deutsche Bank implement measures to guard against data leakage and protect confidential business information, the importance of securing AI technologies becomes increasingly apparent.