What You Must Know About the OWASP Top 10 for LLM Applications 2025 Update
As GenAI becomes a vital part of business operations, the risks it brings are evolving just as fast. Here are four key takeaways from the latest OWASP LLM guidance.
Cybersecurity Snapshot: AI Security Roundup: Best Practices, Research and Insights
In this special edition, we’ve selected the most-read Cybersecurity Snapshot items about AI security this year. ICYMI the first time around, check out this roundup of data points, tips and trends about secure AI deployment; shadow AI; AI threat detection; AI risks; AI governance; AI cybersecurity…
Cybersecurity Snapshot: Prompt Injection and Data Disclosure Top OWASP’s List of Cyber Risks for GenAI LLM Apps
Don’t miss OWASP’s update to its “Top 10 Risks for LLMs” list. Plus, the ranking of the most harmful software weaknesses is out. Meanwhile, critical infrastructure orgs have a new framework for using AI securely. And get the latest on the BianLian ransomware gang and on the challenges of protecting…
Cybersecurity Snapshot: How AI Can Boost Your Cybersecurity Program
More security teams are incorporating AI to uplevel their defense strategies and boost productivity. With so much AI buzz, it may be overwhelming to decipher which tools to acquire and how they fit in a modern security strategy. Read on to explore how AI-enabled tools can help enhance your security…
Amazon Rufus AI: “While Coca-Cola is a Popular Brand, I Would Suggest Healthier Alternatives Like Pepsi”
Our research shows how the Rufus AI assistant in Amazon’s shopping platform can be manipulated to promote one brand over another. We also found it can be used to “shop” for potentially dangerous, sensitive and unauthorized content.
SSRFing the Web with the Help of Copilot Studio
Tenable Research discovered a critical information-disclosure vulnerability in Microsoft’s Copilot Studio via a server-side request forgery (SSRF), which allowed researchers access to potentially sensitive information regarding service internals with potential cross-tenant impact.
Cybersecurity Snapshot: First Quantum-Resistant Algorithms Ready for Use, While New AI Risks Database Is Unveiled
NIST has released the first encryption algorithms that can protect data against quantum attacks. Plus, MIT launched a new database of AI risks. Meanwhile, the CSA published a paper outlining the unique risks involved in building systems that use LLMs. And get the latest on Q2’s most prevalent…
Compromising Microsoft's AI Healthcare Chatbot Service
Tenable Research discovered multiple privilege-escalation issues in the Azure Health Bot Service via a server-side request forgery (SSRF), which allowed researchers access to cross-tenant resources.
Securing the AI Attack Surface: Separating the Unknown from the Well Understood
There's a lot of confusion and unknowns regarding AI application risks, and a lot of vendors are trying to offer solutions to problems that aren't clearly defined. In this blog we explain why a smart approach is to start by focusing on basic, foundational cyber hygiene, adopt well-established best…
Do You Think You Have No AI Exposures? Think Again
As AI usage becomes more prevalent in organizations globally, security teams must get full visibility into these applications. Building a comprehensive inventory of AI applications in your environment is a first step. Read on to learn what we found about AI application usage in the real world when…
Never Trust User Inputs -- And AI Isn't an Exception: A Security-First Approach
As AI transforms industries, security remains critical. Discover the importance of a security-first approach in AI development, the risks of open-source tools, and how Tenable's solutions can help protect your systems.