Uncover prompt injection and insider threats with Tenable One Model Refusal Detection
Tenable One's new Model Refusal Detection turns an LLM's refusal to execute a risky or suspicious prompt into a high-fidelity early warning signal. It helps you uncover and stop prompt injection attacks, insider threats, and other risky behaviors before they escalate into a breach.
Security for AI: A guide to managing the risks of vibe coding and AI in software development
Get a template for an AI coding acceptable use policy with security controls and a list of 25 security questions to ask software developers and “citizen developers” about their AI use. Mitigate the security risks of vibe coding and using AI in software development with Tenable One.
Meet Tenable Hexa AI: Agentic AI for Exposure Management
Meet Tenable Hexa AI, the agentic engine of the Tenable One Exposure Management Platform. This article explains how Tenable Hexa AI automates complex security workflows, turning exposure intelligence into coordinated action so security teams can measurably reduce cyber risk.
Don't confuse asset inventory with exposure management
Asset discovery tells you what IT exists in your environment. Exposure management tells you what will get you breached. If your platform can't connect vulnerabilities, identities, misconfigurations, and AI systems into real attack paths, you don't have exposure management. You have inventory.
LeakyLooker: Hacking Google Cloud’s Data via Dangerous Looker Studio Vulnerabilities
Tenable Research revealed "LeakyLooker," a set of nine novel cross-tenant vulnerabilities in Google Looker Studio. These flaws could have let attackers exfiltrate or modify data across Google services like BigQuery and Google Sheets. Google has since remediated all identified issues.
Gartner® names Tenable the "Company to Beat" in AI-powered exposure assessment in a 2025 report
In "AI Vendor Race: Tenable Is the Company to Beat for AI-Powered Exposure Assessment," Gartner writes: "Tenable's asset and attack surface coverage, adoption of AI, and reputation in vulnerability assessment make it the front-runner in AI-powered exposure assessment."
What Anthropic’s Latest Model Reveals About the Future of Cybersecurity
AI can find vulnerabilities with unprecedented speed, but discovery alone doesn’t reduce cyber risk. We need exposure prioritization, contextual risk analysis, and AI-driven remediation to transform findings into security outcomes.
From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent
Moltbot, the viral AI agent, is riddled with critical vulnerabilities, exposed control interfaces, and malicious extensions that put users' sensitive data at risk. Understand the immediate security practices you can implement to mitigate this enormous agentic AI security risk.
Introducing Tenable One AI Exposure: Setting a New Standard for Securing AI Use at Scale
With Tenable One AI Exposure, continuously discover and monitor all AI use across your organization, including shadow AI, agents, browser plugins, and other applications. Map complex AI workflows to pinpoint high-impact exposures and ensure AI use complies with security and acceptable use policies.
Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage.
Microsoft Copilot Studio Security Risk: How Simple Prompt Injection Leaked Credit Cards and Booked a $0 Trip
The no-code power of Microsoft Copilot Studio introduces a new attack surface. Tenable AI Research demonstrates how a simple prompt injection attack against an AI agent bypasses security controls, leading to data leakage and financial fraud. We provide five best practices to secure your AI agents.
Detecting AI Security Risks Requires Specialized Tools: Time to Move Beyond DLP and CASB
Learn why your existing security tech won’t detect data exposure, prompt injection and manipulation, and other AI security risks from ChatGPT Enterprise, Microsoft 365 Copilot, and other LLMs.