HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms.
Cybersecurity Snapshot: Top Guidance for Improving AI Risk Management, Governance and Readiness
Many organizations are playing catch-up in key AI security policy areas, such as usage governance, risk oversight, data protection, and staff training. In this Cybersecurity Snapshot special edition, we round up recent guidance on preparing for, managing and governing AI cyber risks.
Security for AI: A Practical Guide to Enforcing Your AI Acceptable Use Policy
An AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement.
Cybersecurity Awareness Month Is for Security Leaders, Too
Think you know all there is to know about cybersecurity? Guess again. Shadow AI is challenging security leaders with many of the same issues raised by other “shadow” technologies. Only this time, it’s evolving at breakneck speed.
Synack + Tenable: AI-Powered Partnership Translates Vulnerability Insights into Action
The combined Synack/Tenable solution reduces alert noise for overloaded security teams, isolating the most exploitable threats so they can proactively close security gaps faster.
Why Google’s Warning Highlights Critical Risk of AI Context-Injection Attacks
Google, with its unparalleled visibility into Gemini, recently alerted its legion of Gmail users about indirect prompt attacks, which exploit AI context sources like emails, calendar invites and files. Coming from a major AI vendor, the frank and direct public alert leaves no doubt that…
Tenable Jailbreaks GPT-5, Which Generates Dangerous Information Despite OpenAI's New Safety Technology
Less than 24 hours after OpenAI released GPT-5, Tenable Research successfully jailbroke it, getting it to share detailed instructions on how to build an explosive device. Our finding is concerning, given that OpenAI described GPT-5's prompt safety technology as significantly…
The AI Security Dilemma: Navigating the High-Stakes World of Cloud AI
AI presents an incredible opportunity for organizations even as it expands the attack surface in new and complex ways. For security leaders, the goal isn't to stop AI adoption but to enable it securely. Artificial Intelligence is no longer on the horizon; it's here, and it's being built and deployed…
Introducing Tenable AI Exposure: Stop Guessing, Start Securing Your Enterprise AI Attack Surface
Tenable AI Exposure, now available in Tenable One, gives organizations clear visibility into how their teams are using AI platforms and where that usage could put their data, users and defenses at risk.
CVE-2025-54135, CVE-2025-54136: Frequently Asked Questions About Vulnerabilities in Cursor IDE (CurXecute and MCPoison)
Researchers have disclosed two vulnerabilities in Cursor, the popular AI-assisted code editor, that affect its handling of Model Context Protocol (MCP) servers and could be used to gain code execution on vulnerable systems.
The White House AI Action Plan: A Critical Opportunity to Secure the Future
AI without built-in cybersecurity remains a liability. The AI Action Plan presents a pivotal opportunity to get this right by emphasizing a secure-by-design approach.
Cybersecurity Snapshot: AI Security Trails AI Usage, Putting Data at Risk, IBM Warns, as OWASP Tackles Agentic AI App Security
Check out fresh insights on AI data security from IBM’s “Cost of a Data Breach Report 2025.” Plus, OWASP issues guide on securing Agentic AI apps. In addition, find out how to protect your org against the Scattered Spider cyber crime group. And get the latest on zero-trust microsegmentation;…