Apple restricts employees from using ChatGPT over fear of data leaks
Apple has restricted employees from using AI tools like OpenAI’s ChatGPT over fears that confidential information entered into these systems could be leaked or collected.
According to a report from The Wall Street Journal, Apple employees have also been warned against using GitHub’s AI programming assistant Copilot. Bloomberg reporter Mark Gurman tweeted that ChatGPT had been on Apple’s list of restricted software “for months.”
Apple has good reason to be wary. By default, OpenAI stores all interactions between users and ChatGPT. These conversations are collected to train OpenAI’s systems and can be inspected by moderators for violations of the company’s terms of service.
In April, OpenAI launched a feature that lets users turn off chat history (coincidentally, not long after various EU nations began investigating the tool for potential privacy violations). But even with this setting enabled, OpenAI still retains conversations for 30 days, with the option to review them “for abuse,” before deleting them permanently.
Given the utility of ChatGPT for tasks like improving code and brainstorming ideas, Apple may rightly be worried that its employees will enter information on confidential projects into the system. This information might then be seen by one of OpenAI’s moderators. Research shows it’s also possible to extract training data from some language models through their chat interfaces, though there’s no evidence that ChatGPT itself is vulnerable to such attacks.
Apple’s ban, though, is notable given that OpenAI launched an iOS app for ChatGPT this week. The app is free to use, supports voice input, and is available in the US. OpenAI says it will launch the app in other countries soon, along with an Android version.