Adobe Launches AI@Adobe Working Group and Implements Guidelines for Internal Use of Generative AI Apps

In a recent internal email, Adobe's Chief Information Officer, Cindy Stoddard, announced the launch of an internal working group called AI@Adobe.
The purpose of this group is to aid the company's adoption of generative AI applications and ensure responsible and ethical exploration of this emerging technology within the organization. The email detailed several guidelines and restrictions that Adobe employees must adhere to when using generative AI apps. One of the notable restrictions prohibits the use of personal email accounts and corporate credit cards when signing up for these applications.
By enforcing this policy, Adobe aims to safeguard sensitive company and customer information by keeping data within the organization's approved channels. Employees are also strongly advised to opt out of allowing AI apps to use their data for machine-learning training, a measure that reinforces Adobe's commitment to protecting employee data privacy and preventing unintended consequences from the use of generative AI.

In line with those data privacy concerns, Stoddard emphasized that employees must not share personal or non-public Adobe data, including financial information. This restriction is intended to maintain the confidentiality of sensitive information and prevent potential breaches or unauthorized access. The decision to implement these guidelines is notable given Adobe's position as a major software provider and its expanding portfolio of generative AI products. Even as it actively embraces the technology in its external offerings, Adobe is taking additional security measures to ensure responsible use within the company.
Adobe is not alone in implementing such guidelines. Other tech giants like Amazon, Apple, Alphabet, and Samsung have also restricted the use of generative AI tools by their employees. These measures reflect the industry's recognition of the importance of responsible data usage and privacy in the era of AI technologies. To further assist employees in navigating the usage of AI tools, Stoddard shared a list of do's and don'ts.
The recommendations include finding approved software within Adobe's internal Workspace Store to ensure compliance with company policies. Additionally, employees are encouraged to verify the correctness of AI outputs and contact Adobe Security if accidental data exposure occurs. On the other hand, there are several practices that employees should avoid. These include disclosing personal or non-public Adobe data, using personal email accounts for work-related tasks, making work-related software purchases on personal credit cards, and using AI outputs verbatim without review.
In the email, Stoddard provided examples of data classifications for employees to follow when using ChatGPT-type AI apps. These categories — restricted data, confidential data, internal data, and public data — serve as a guide for assessing whether generative AI is suitable in a given scenario. For instance, Stoddard advised against using generative AI to summarize sensitive financial or customer data, review source code, write emails based on internal information, or incorporate unverified information from AI models into business projects.
With these guidelines in place, Adobe demonstrates its commitment to the responsible and secure internal use of generative AI technologies. By establishing strict protocols, the company aims to prevent the data breaches and misuse of sensitive information that could arise from the adoption of such tools.