All posts

Securing AI Applications in the Enterprise: Three Lessons Learned

Vadym Babiuk
Senior Software Engineer
Table of Contents

LLMs conquer the enterprise

In late 2022, the world was taken by storm when OpenAI released ChatGPT. While Large Language Models (LLMs) existed before ChatGPT, ChatGPT caught the attention of the public and media with its broad capabilities and user-friendliness. Millions of people around the world started leveraging LLMs for help with everyday tasks - from quickly looking up cooking recipes to drafting emails to getting help with schoolwork. Enterprises were not far behind. 

Within six months of ChatGPT’s release, it became evident that generative AI (GenAI) technology was going to transform the workplace. Large enterprises quickly entered the LLM adoption race. 

Two years later, in 2025, LLMs are a foundational element of the enterprise technology stack. Large enterprises build applications on top of LLM Application Programming Interfaces (APIs), leverage various coding and work copilots, and give their employees access to AI assistants to boost productivity. For example, both JPMorgan Chase and Goldman Sachs gave their employees access to AI assistants in 2024 and 2025, respectively.

With great power comes great responsibility

As with any disruptive technology, GenAI introduces a new set of unique security and safety challenges that enterprises have to consider while innovating. 

AI security is not a new concept. For example, one of the first taxonomies of adversarial attacks on machine learning systems came out in 2006. However, LLMs have brought AI security into the spotlight. 

Threat taxonomies and security frameworks for LLMs such as OWASP Top 10 for LLMs and MITRE ATLAS were developed quickly, and enterprises started looking for ways to secure their LLM applications. Security officers who are tasked with ensuring safe adoption of GenAI in the enterprise now find themselves in a challenging position. How can they secure a powerful technology with both known and unknown risks without stifling innovation?

At TrojAI, we have developed an AI security platform that helps enterprises do exactly that. TrojAI integrates guardrails and protections for LLM behavior into enterprise GenAI infrastructure without affecting the user or developer experience. Having deployed our products to multiple enterprise environments over the last several years, we have learned many valuable lessons about LLM security in the enterprise. We are sharing some of these lessons in this post.

Lesson #1: Best-of-breed matters

Working with large enterprises, we find that decision-makers selecting any new security platform struggle with choosing between a best-of-breed approach versus a best-of-suite approach. Best-of-breed refers to multiple deeply specialized solutions that each secure a narrow aspect of security, usually from different vendors. Best-of-suite is selecting one solution from one vendor that covers many broad aspects of security. 

The reality is that there are no best-of-suite solutions for AI security today. 

We see some traditional security providers that offer broad coverage trying to extend into AI security, but this approach does not address the unique security challenges of securing AI or the speed with which this technology is changing. Enterprises must consider the breakneck pace of innovation in the generative AI space, including the discovery of new security risks, as well as the distinct limitations of AI when securing these systems.

Depth of AI security coverage and deep expertise matter. Vendors offering best-of-breed solutions have the right tools and knowledge to react to new vulnerabilities and roll out updates to their security products fast. Because best-of-breed vendors focus on this problem, they have become experts in this space.

Enterprises cannot afford to rely on surface-level protection when securing their AI models and applications. Expertise matters.

Lesson #2: Find the right partner

After seeing multiple enterprise LLM implementations, we have learned that while many AI security risks are universal, each enterprise faces challenges that are specific to them. 

For example, a public-facing chatbot might need to be secured against toxicity, prompt injections, and denial-of-service attacks. At the same time, an internal document summarization service that uses a third-party LLM API needs to be extra resilient to prevent personally identifiable information (PII) leakage. Though some protections are more or less universal, AI security experts must always consider use case context when securing LLM-powered applications. 

Because your organization will have use cases that call for unique security actions, it is important to choose an AI security vendor that is willing to partner with you to ensure your success.

The right AI security vendor is the one who not only has deep knowledge of AI security but will also provide your organization with the best guidance and support possible to address the one-of-a-kind security challenges that your enterprise faces.

Work with a vendor who will partner with you so that you are better positioned for future innovation and prepared for the associated risks.

Lesson #3: Pick an enterprise-proven solution

Choosing the right security solution in large organizations can be a lengthy process. Getting approvals to use the newest LLMs, procuring GPU resources, or getting a go-ahead for a new LLM-based application can take weeks or months. 

Choosing the right AI security vendor requires both patience and foresight. However, in the world of AI security, things can change rapidly. A newly discovered vulnerability may require urgent implementation of protective measures, or a new use case might emerge, requiring swift reconfiguration of the protections in place. Given the large user base these decisions often impact, the solution must balance steady progress with the flexibility to act quickly when necessary.

Finding an AI security vendor with a proven track record in the enterprise is essential. You need a partner who can scale to meet the needs of complex environments but also has the agility to respond to emerging risks. Achieving the right balance between careful planning and swift action is crucial for ensuring AI security and safety.

Make sure you select a vendor with known expertise and success in securing large global enterprises.

Securing AI applications in the enterprise

We know first-hand that securing AI for large enterprises is a challenging yet rewarding task. The generative AI space is moving fast, which means that security professionals must keep the same pace. The space is young and will definitely see more breakthroughs and disruptive changes, to which enterprises need to quickly adapt. 

Leveraging generative AI securely does not mean slowing down innovation. As we have learned from our experiences working with large global enterprises, adopting new technology and securing it go hand in hand, are complementary to each other, and can ensure that organizations fully benefit from the technology.

How TrojAI protects AI models and applications

Our mission at TrojAI is to enable the secure rollout of AI in the enterprise. We are a comprehensive AI security platform that protects AI/ML applications and infrastructure. Our best-in-class platform empowers enterprises to safeguard AI applications and models both at build time and run time. TrojAI Detect automatically red teams AI models, safeguarding model behavior and delivering remediation guidance at build time. TrojAI Defend is a firewall for AI that protects enterprises from real-time threats at run time. 

By assessing the risk of AI model behaviors during the model development lifecycle and protecting model behavior at run time, we deliver comprehensive security for your AI models and applications.

Want to learn more about how TrojAI secures the largest enterprises globally with a highly scalable, performant, and extensible solution?

Visit us at troj.ai now.