Big IT Companies Are Restricting Employee Use Of AI Chatbots To Prevent Data Leakage

While governments and technology groups around the world debate AI policy, time is running short. The primary goal is to keep people safe from misinformation and the risks it entails.

And the debate is heating up now that concerns about data privacy are entering the picture. Have you ever considered the hazards of handing your personal information to ChatGPT, Bard, or another AI chatbot?

If you haven't heard, the tech behemoths have been taking substantial steps to prevent information leaks.

After sensitive data was accidentally leaked to ChatGPT, Samsung notified its workforce in early May of a new internal policy prohibiting generative AI tools on devices running on its networks.

"The company is reviewing measures to create a secure environment for safely using generative AI to enhance employees' productivity and efficiency," a Samsung representative told TechCrunch.

The representative added that until those safeguards are ready, the company is temporarily restricting the use of generative AI on company equipment.

Apple is another titan that took a similar stance. According to The Wall Street Journal, Samsung's competitor is likewise worried about sensitive data leaking. Its restrictions therefore cover ChatGPT as well as several AI tools used to write code, while the company develops similar technology of its own.

Earlier this year, an Amazon lawyer warned employees not to share any information or code with AI chatbots after the company discovered ChatGPT responses that closely resembled proprietary Amazon data.

Beyond Big Tech, banks such as Bank of America and Deutsche Bank are adopting internal restrictions to prevent leaks of financial information.

And the list is still expanding. You guessed it! Even Google got involved.


Google, Are You Listening?

According to Reuters' unnamed sources, Alphabet Inc. (Google's parent company) told staff last week not to enter sensitive information into AI chatbots. That includes its own AI, Bard, which launched in the United States last March and is now rolling out to another 180 countries in 40 languages.

Google made the decision after researchers found that chatbots can reproduce the data entered into them through millions of prompts, making it available to human reviewers.

Alphabet also urged its engineers to avoid pasting code into chatbots, since the AI can reproduce it, potentially disclosing confidential details of the company's technology. Not to mention that ChatGPT is its own AI's competitor.

Google says it intends to be transparent about the limitations of its technology and has updated its privacy notice to advise users "not to include confidential or sensitive information in their conversations with Bard."


Over 100,000 ChatGPT Accounts Have Surfaced On Dark Web Marketplaces

Another factor that can expose sensitive data: as AI chatbots grow more popular, employees around the world are using them to streamline their routines, most of the time without any caution or oversight.

Group-IB, a Singapore-based global cybersecurity solutions provider, reported yesterday that it had discovered the credentials of more than 100,000 compromised ChatGPT accounts in the logs of info-stealing malware. Since last year, this stolen data has been traded on illicit dark web marketplaces. The firm stressed that ChatGPT saves the history of user queries and AI responses by default, so a lack of basic precautions exposes many companies and their personnel.


Governments Are Pushing For Regulation

Businesses are not the only ones worried about information leakage caused by AI. In March, Italy ordered OpenAI to stop processing Italian users' data after discovering a ChatGPT breach that let users see the titles of other users' conversations.

OpenAI confirmed the flaw in March. "We had a significant issue in ChatGPT due to a bug in an open-source library, which has now been fixed and we have just completed validating," Sam Altman stated on Twitter at the time, adding that only a small number of users could see the titles of other users' conversation histories. "We feel terrible about this."

The United Kingdom published an AI white paper on its official website to promote responsible innovation and public confidence, built around the following five principles:

  • safety, security, and robustness;
  • transparency and explainability;
  • fairness;
  • accountability and governance;
  • contestability and redress.


As we can see, as AI becomes more prevalent in our lives, particularly at the pace at which it is developing, new concerns naturally arise. Security measures become necessary as developers try to reduce risks without jeopardizing the evolution of what we already recognize as a significant step forward.