
5 Key Principles in a Generative AI Policy for Employees

By Tristan Barnum
What should be included in a generative AI policy for employees?

Generative AI tools such as ChatGPT, GitHub Copilot and language models from Anthropic and Google are rapidly changing how work gets done. With the ability to understand natural language prompts and generate human-like text outputs, generative AI (GAI) can boost productivity for knowledge workers in ways not seen since the advent of the internet itself.

But organizations are ill-advised to simply let employees plug sensitive data and IP into these AI models without guardrails. The risk of inadvertently disclosing proprietary information is real.

As some recent reporting has shown, mishandling this powerful technology can lead to potentially damaging and legally precarious situations. For example, 84% of workers who use generative AI at work have publicly exposed their company’s data in the last three months, according to a 16-country, 15,000-person study conducted by the Oliver Wyman Forum.

And a global study by Salesforce of 14,000 workers across 14 countries revealed that more than half of workers are using generative AI tools in the workplace without formal approval from their employers. These employees likely recognize the importance of generative AI but lack clear guidelines on its ethical and safe usage. Salesforce survey respondents were also engaging in some ethically questionable activities when using generative AI at work: 64% have passed off generative AI work as their own.

Concerns about security led the U.S. House of Representatives to ban staffers from using Microsoft Copilot, an AI assistant. Guidance from the House's chief administrative officer notes, “The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services.”

All of this underscores why companies can't simply let employees start using GAI tools with no direction, potentially feeding proprietary data into generative AI models without firm guardrails in place. Guardrails also head off a variety of other risks, including IP and licensing issues with AI outputs and employees blindly treating large language model (LLM) responses as absolute truth.

To balance the massive potential of generative AI against its risks, companies must develop thoughtful policies and guidelines for how employees can use these tools appropriately. At Wildfire, we've prioritized creating processes that ensure the safe and secure usage of GAI. Here are some of the key principles we've incorporated into our GAI usage policy:

1. Identify Approved Use Cases

The first step is clarifying where GAI can provide legitimate value to employees. Our policy outlines approved use cases such as:

  • Coding support: Using GAI as an intelligent pair programmer to prototype code, write tests, etc., with the understanding that any outputs need to be carefully reviewed
  • Document and presentation creation: Leveraging GAI to save time on generating first drafts based on prompts and guidance
  • Research assistance: Using GAI's broad knowledge base to surface insights and patterns more efficiently than manual searching and sifting through information

Wherever AI can enhance productivity without completely replacing human judgment and effort, it is given a green light.

2. Establish Licensing and IP Protection Guidelines

For use cases where the AI could generate code or other IP, it's critical to set rules on licensing and on protecting proprietary information. As an example, our policy specifically requires employees to:

  • Only use AI-generated code as reference material to be fully rewritten by the developer
  • Avoid using proprietary technical info, product road maps, etc. as GAI prompts
  • Read through prompts carefully to ensure no sensitive data is included (a simple screening sketch follows below)

Ultimately, employees need to understand they are responsible for the end product, just as they would if it were created manually.
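
To make that last guideline concrete, here is a minimal sketch of the kind of lightweight prompt screening a team could layer on top of manual review: a hypothetical pre-submission check that flags obviously sensitive terms before a prompt is sent to an external model. The pattern list, function name and workflow are assumptions for illustration only, not part of our actual policy tooling or any vendor's API.

    # Hypothetical pre-submission check: flag obviously sensitive terms in a prompt
    # before it is sent to an external generative AI service. The patterns and
    # names here are illustrative assumptions, not part of any vendor tooling.
    import re

    # Terms and formats a team might treat as sensitive (assumed for illustration).
    SENSITIVE_PATTERNS = [
        r"\broad ?map\b",
        r"\bconfidential\b",
        r"\binternal only\b",
        r"\bapi[_ ]?key\b",
        r"\b\d{3}-\d{2}-\d{4}\b",  # US Social Security number format
    ]

    def screen_prompt(prompt: str) -> list[str]:
        """Return the patterns that matched; an empty list means the prompt passes."""
        return [p for p in SENSITIVE_PATTERNS if re.search(p, prompt, re.IGNORECASE)]

    if __name__ == "__main__":
        prompt = "Summarize our confidential 2025 product road map for the sales team."
        hits = screen_prompt(prompt)
        if hits:
            print("Review before sending -- possible sensitive content:", hits)
        else:
            print("No obvious sensitive terms found; still review manually.")

A check like this is no substitute for the careful reading the policy requires, but it can catch obvious slips before a prompt ever leaves the building.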

3. Provide Opt-Out Options for Data Sharing

To limit exposure of proprietary company data, our policy recommends employees opt out of allowing data sharing with AI providers wherever possible. This shuts off potential vectors for information leaks.

For enterprise AI tools, such as GitHub Copilot, there is no data sharing. But for ChatGPT and other publicly available models, the policy links to opt-out forms that prevent user prompts from being used to further train the model. This is a non-negotiable requirement for our teams.

4. Define Approval and Review Processes

Rather than allowing a free-for-all, our policy specifies a review process for any new GAI tools that employees want to try out. Employees must first get approval from the security and compliance teams, who will vet the data practices and potential risks. This acts as a filter to prevent rogue adoption of risky models and promotes alignment on fully approved tools from the top down.


5. Iterate as Policies Evolve

Of course, the best practices mentioned above are based on our initial take on governing GAI usage. We fully expect to revise and expand our policy as we learn more. Our policy acknowledges it is not comprehensive and will be updated as the landscape shifts, especially with emerging regulations and court rulings on AI licensing, IP and data protection.

As with other emerging technologies, great power comes with great responsibility: to harness these tools productively, stay within the proper guardrails and avoid hampering creativity and innovation. By developing clear guidelines and an approval process, companies can embrace the GAI productivity revolution on firmer ethical and legal footing.

About the Author
Tristan Barnum

Tristan Barnum oversees the marketing and client success teams at Wildfire, a customer loyalty and rewards company based in Solana Beach, California. Prior to Wildfire, Barnum co-founded two startups: Tellient, an analytics platform built for the internet of things, and Switchvox, which served the rapidly growing SMB market for VoIP phone systems.

Main image: By Amy Hirschi.