The TrojAI Approach to Securing AI Models

TrojAI was built on a foundation of both classic cybersecurity and AI safety, resulting in a unique, resilient, and robust approach to securing AI systems.

Max Hennick
12 min

What Is Prompt Injection in AI?

Prompt injection is the deliberate manipulation of an input provided to an AI model to alter its behavior and generate harmful or malicious outputs.

Julie Peterson
7 min

TrojAI Joins Microsoft for Startups Pegasus Program

TrojAI is proud to be part of the invite-only Microsoft for Startups Pegasus Program.

Christian Falco
3 min

Securing AI Apps from GenAI Threats: MongoDB Atlas and TrojAI

AI applications are becoming more common across all verticals as large enterprises seek to optimize their internal, external, and partner-facing use cases.

Christian Falco
8 min

Five AI Security Predictions for 2025

AI security is evolving at breakneck speed. By the end of 2025, the landscape will look vastly different than it does today.

James Stewart
4 min

Why We Founded TrojAI: Behavioral Risk Is the Biggest Threat to AI Models

My first startup built AI/ML models that analyzed live video to detect the presence of violence in public spaces.

James Stewart
5 min

The 2025 OWASP Top 10 for LLMs

The Open Worldwide Application Security Project (OWASP) is a non-profit organization that offers guidance on how to improve software security.

Julie Peterson
4 min

Top 3 Reasons Why You Need a Firewall for Your AI Applications

Do built-in LLM guardrails provide enough protection for your enterprise when using GenAI applications?

James Stewart
5 min