Avoiding Too Many Cooks in the Kitchen: Agentic AI Use Cases
April 8, 2025 | 5 min read
Mona Ghadiri
Senior Director of Product Management

AI agents work together to complete tasks. Think of a kitchen: at home, we do everything ourselves, planning the menu, buying the ingredients, peeling, chopping, sautéing, and plating. In a restaurant, everyone has a different job. Agentic AI brings that restaurant-level organization and specialization to your own kitchen.
In the case of cybersecurity, agents can provide the necessary backup for what you can’t get to in a day in the SOC. However, there are many companies offering to help you “cook,” leading to a natural conclusion that we may end up with too many “cooks” in the kitchen.
What we want from Agentic AI comes down to scaling skills, applying them to the right use cases, and blending workflows and expertise together.
However, not all AI agents are built around the same parameters or goals, and not all succeed in their mission; there are a lot of bad cooks (and use cases) out there. Do you know which ones apply to cybersecurity and are good fits, versus those that are mostly hype and lack tangible value? This blog will explore how AI agents are being used in cybersecurity, their best uses, and how to pick the right cooks for your kitchen.
Before we talk about the ways you can apply Agentic AI, let's examine the nine types of agent "cooks" that exist today:
Reactive Agents
These are the simplest form of agentic AI, designed to respond to specific stimuli or changes in their environment. They do not have memory or the ability to learn from past experiences, reacting only to current inputs.
Example: Autonomous bots that navigate around your kitchen and clean up for you in real time, without pre-programming.
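To make that concrete, here is a minimal sketch in Python (purely illustrative, not tied to any product) of the reactive pattern: the agent maps its current input directly to an action, with no memory and no learning.

```python
# Minimal sketch of a reactive agent (illustrative only): it maps the
# current stimulus directly to an action, with no memory of past events.
def reactive_agent(stimulus: str) -> str:
    rules = {
        "crumbs_on_floor": "run_vacuum",
        "spill_detected": "mop_spill",
        "counter_clear": "return_to_dock",
    }
    # React only to the current input; anything unrecognized gets a default.
    return rules.get(stimulus, "idle")

print(reactive_agent("spill_detected"))  # -> mop_spill
```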
Deliberative Agents
These agents use models of the world to plan and execute actions. They can evaluate different scenarios and make decisions based on predicted outcomes.
Example: Deciding what to make for dinner, or purple team AI that considers possible future moves before deciding on its next action.
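A minimal sketch of the deliberative pattern, using a toy dinner-planning "world model" with made-up scores: the agent evaluates each candidate action by its predicted outcome before committing.

```python
# Minimal sketch of a deliberative agent (illustrative only): it scores each
# candidate action against a toy "world model" of predicted outcomes and
# picks the best one, rather than reacting to the current input alone.
def predict_outcome(dish: str, state: dict) -> float:
    time_cost = {"pasta": 20, "roast": 90, "salad": 10}      # minutes (made up)
    satisfaction = {"pasta": 7, "roast": 9, "salad": 5}      # made-up scores
    return satisfaction[dish] - (time_cost[dish] / state["minutes_available"]) * 3

def deliberative_agent(state: dict) -> str:
    candidates = ["pasta", "roast", "salad"]
    return max(candidates, key=lambda d: predict_outcome(d, state))

print(deliberative_agent({"minutes_available": 30}))  # weighs outcomes, picks "pasta"
```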
Learning Agents
Learning agents can improve their performance over time by learning from past experiences. They use techniques like reinforcement learning to adapt their strategies.
Example: Improving recipes you use over time, or AI systems in pen tests that become better opponents by learning from defender actions.
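Here is a minimal sketch of the learning pattern: a toy reinforcement-learning loop with a made-up reward signal, where the agent's action values improve from feedback over repeated trials.

```python
import random

# Minimal sketch of a learning agent (illustrative only): a tiny
# reinforcement-learning loop that updates action values from feedback,
# so the agent's choices improve over repeated trials.
q_values = {"more_salt": 0.0, "less_salt": 0.0}
learning_rate, exploration = 0.5, 0.2

def feedback(action: str) -> float:
    # Made-up reward signal: diners in this toy world prefer less salt.
    return 1.0 if action == "less_salt" else -1.0

for _ in range(50):
    if random.random() < exploration:
        action = random.choice(list(q_values))      # explore a random option
    else:
        action = max(q_values, key=q_values.get)    # exploit what was learned
    reward = feedback(action)
    q_values[action] += learning_rate * (reward - q_values[action])

print(max(q_values, key=q_values.get))  # converges toward "less_salt"
```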
Collaborative Agents
These agents work together with other AI systems or humans to achieve a common goal. They are designed to communicate and coordinate effectively.
Example: Tracking what's in the fridge and what's on the menu, or AI in the SOC that collaborates with other systems to optimize ticket creation.
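A minimal sketch of collaboration, with hypothetical fridge and menu agents: each contributes its own expertise, and a coordinator passes information between them toward the shared goal.

```python
# Minimal sketch of collaborative agents (illustrative only): one agent
# reports what is in the fridge, another plans the menu, and a coordinator
# passes information between them toward the shared goal.
def fridge_agent() -> set:
    return {"eggs", "spinach", "cheese"}              # inventory observation

def menu_agent(inventory: set) -> str:
    recipes = {"omelette": {"eggs", "cheese"}, "stir_fry": {"rice", "tofu"}}
    for dish, needed in recipes.items():
        if needed <= inventory:                       # everything required is on hand
            return dish
    return "order_takeout"

def coordinator() -> str:
    # Coordination step: hand the fridge agent's findings to the menu agent.
    return menu_agent(fridge_agent())

print(coordinator())  # -> omelette
```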
Mobile Agents
Mobile agents can move across different networks or systems to perform tasks. They are particularly useful in distributed computing environments.
Example: Stove and oven cooking combinations, or network management tools that autonomously diagnose and solve connectivity issues across multiple servers.
Multi-Agent Systems
Comprising multiple interacting agents, these systems can solve complex problems through cooperation. They are used in simulations, robotics, and distributed artificial intelligence.
Example: Different knives with different chopping skills, or swarm robotics where multiple robots work together to complete tasks like cutting a variety of vegetables or search and rescue missions when food starts to go bad.
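A minimal sketch of a multi-agent system: several worker agents with different skills divide a shared task queue. The task names and skills are made up for illustration.

```python
from collections import deque

# Minimal sketch of a multi-agent system (illustrative only): several worker
# agents, each with a different skill, divide a shared queue of tasks.
tasks = deque(["dice_onion", "slice_tomato", "dice_carrot", "slice_bread"])
agents = {"dicer": "dice", "slicer": "slice"}
assignments = {name: [] for name in agents}

while tasks:
    task = tasks.popleft()
    for name, skill in agents.items():
        if task.startswith(skill):                    # route work to the agent with that skill
            assignments[name].append(task)
            break

print(assignments)  # the work is divided across cooperating agents
```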
Autonomous Agents
These agents operate independently, making decisions without human intervention. They often have sophisticated sensors and effectors to interact with their environment.
Example: Autonomous AI SOCs that navigate threats on their own.
Intelligent Personal Assistants
Designed to perform tasks for users, these agents utilize natural language processing and machine learning to understand and execute commands.
Example: Virtual assistants for customer support or meetings.
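A minimal sketch of the assistant pattern. Real assistants rely on natural language models; plain keyword matching is used here only to show the basic shape of mapping a user request to an action.

```python
# Minimal sketch of a personal-assistant agent (illustrative only): real
# assistants use natural language models, but simple keyword matching shows
# the basic shape of mapping a user request to an action.
def assistant(request: str) -> str:
    intents = {
        "schedule": "create_calendar_event",
        "summarize": "summarize_meeting_notes",
        "ticket": "open_support_ticket",
    }
    for keyword, action in intents.items():
        if keyword in request.lower():
            return action
    return "ask_for_clarification"

print(assistant("Please summarize yesterday's incident call"))  # -> summarize_meeting_notes
```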
Cognitive Agents
Cognitive agents emulate human-like thought processes, including reasoning, understanding, and learning. They aim to simulate aspects of human cognition.
Example: AI systems used for diagnosis that reason through complex data to provide recommendations.
Key AI Agents for Cybersecurity Focus
If I put on my cybersecurity human hat, I believe only three are worth focusing on or testing:
Deliberative Agents
- Complex Problem Solving: Deliberative agents excel in environments where decision-making requires evaluating multiple scenarios and outcomes, like cybersecurity.
- Predictive Capabilities: By simulating future states of the world, deliberative agents can anticipate potential issues and opportunities, leading to more informed, strategic decisions during or after an attack. That helps blue teams do better next time, or make better use of the time they have to stop bad actors.
- Improved Algorithms: Advances in algorithms, such as those that constrain false-positive definitions and sharpen how active attacks are defined, have enhanced the ability of deliberative agents to process vast amounts of security data efficiently. Microsoft's attack disruption capability in the XDR console, working with Security Copilot, is an excellent example of this; a minimal sketch of this kind of deliberative scoring appears after this list.
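As a purely illustrative sketch (not Microsoft's or BlueVoyant's implementation), the deliberative scoring pattern applied to response selection might look like this: each candidate action is scored by a simple, made-up model of predicted containment benefit versus business disruption.

```python
# Purely illustrative sketch of the deliberative pattern applied to response
# selection; the actions, scores, and weights are made up and do not reflect
# any vendor's implementation.
def predicted_value(action: str, alert: dict) -> float:
    containment = {"isolate_host": 0.9, "disable_account": 0.7, "monitor_only": 0.1}
    disruption = {"isolate_host": 0.4, "disable_account": 0.3, "monitor_only": 0.0}
    # Weigh predicted containment benefit against predicted business disruption.
    return alert["severity"] * containment[action] - disruption[action]

def choose_response(alert: dict) -> str:
    actions = ["isolate_host", "disable_account", "monitor_only"]
    return max(actions, key=lambda a: predicted_value(a, alert))

print(choose_response({"severity": 0.95}))  # high severity favors stronger containment
print(choose_response({"severity": 0.20}))  # low severity favors watching and waiting
```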
Collaborative Agents
- Teamwork and Coordination: Collaborative agents are designed to work alongside humans or other agents, sharing information and coordinating actions, both of which are needed during incident response. This makes them valuable in cybersecurity, where multiple teams and both structured and unstructured data inputs may be required.
- Enhanced Productivity: By distributing tasks and leveraging the strengths of each participant, collaborative agents can increase overall productivity and efficiency across teams that must collaborate when it matters.
- Interoperability: As systems become more interconnected, the ability of collaborative agents to work across different platforms and technologies is increasingly important, enabling cybersecurity tool integration to evolve past traditional parsers or collector logic as well. The SecOps Tool Agent by BlueVoyant is an example of this.
Personal Assistant Agents
- User Convenience: Personal assistant agents, like Security Copilot Guided Response or other built-in Security Copilot experiences in Entra, Azure, or Intune, offer significant convenience by streamlining routine tasks and providing quick access to information, thus enhancing security users' productivity. The SecOps Tool Agent by BlueVoyant is also capable of interacting like this.
- Natural Language Processing: Improvements in natural language processing have made it easier for users like level 1 analysts or cloud architects to interact with these agents conversationally, without cybersecurity expertise, broadening their accessibility and appeal to junior professionals or those whose roles sit outside cyber.
- Personalization: These agents can gather user preferences over time, offering personalized recommendations for the security environment or suggesting other agents or services the user may benefit from.
At BlueVoyant, we are focused on infusing our knowledge and skills at scale, which brings us full circle. Cyber operations cannot promise or deliver every kind of agentic AI, but our partnership with Microsoft brings us closer than ever. What resonates with us and our customers is a vision where cyber operations teams have AI experiences embedded where they do their work, ways for teams and AI to collaborate at scale, and tools that help them grow their own skills while giving them time to explore.
Mona Ghadiri is a senior director of product management at BlueVoyant and a Microsoft MVP.