The Evolved SOC Analyst
Moving beyond alert triage: A new era of AI-augmented security operations
Welcome to Detection at Scale, a weekly newsletter covering generative AI, security monitoring, cloud data infrastructure, and more. Enjoy!
In security operations, institutional knowledge is everything. Analysts build an intuition over the years about what "normal" looks like in their environment, which alerts matter most, and how incidents typically unfold. However, preserving and scaling this expertise has become nearly impossible as our systems grow more complex and our teams face constant turnover.
Enter the AI security analyst. Unlike the chatbots and automation tools that dominated security headlines in prior years, these intelligent agents are purpose-built assistants who can learn your team's investigative processes, maintain perfect recall of historical incidents, and help analysts make better decisions faster. They're not here to replace your SOC team but to create a new kind of security workflow where machines handle the heavy lifting of data correlation and pattern recognition while humans focus on strategic analysis and novel threats.
The concept isn't entirely new. We've had automated playbooks and SOAR workflows for years, but there's a crucial difference: traditional automation follows predetermined paths, while AI analysts can adapt their investigation approach based on context, previous incidents, and the unique patterns of your environment. This enables a fundamentally different kind of collaboration between humans and machines.
As these systems evolve from basic log aggregation to intelligent analysis platforms, we must understand how to work alongside them effectively. This post will explore how AI analysts will transform security operations, what this means for SOC teams, and how to build effective partnerships with these new intelligent assistants.
If you missed the last introductory post on AI agents in SIEM, check it out first!
The New Investigation Workflow
Traditional security investigation resembles a detective building a case across jurisdictions. There's an initial lead - perhaps a suspicious login alert or unusual network traffic - and the supporting evidence lives in many places. Just like a detective can't file charges solely on a witness statement, analysts shouldn't escalate incidents based on a single data point. Instead, they must build confidence in their findings methodically: endpoint logs confirm the presence of malicious processes, IAM records reveal suspicious access patterns, and network flows demonstrate potential data exfiltration paths. Yet gathering this evidence forces investigators to navigate a maze of different tools and interfaces, spending more time connecting systems than connecting the dots of an attack.
AI analyst agents transform this fragmented workflow into a fluid investigation process flowing through several steps with a clearly defined goal: Resolve this alert and incorporate learnings. Let's walk through a real-world example: investigating a suspicious privileged access attempt. Instead of starting with raw alert data, the AI analyst immediately presents the narrative:
🤖 "User alice.waters attempted to access the production database at 3 AM EST from an unmanaged device. This is unusual for several reasons:
Alice typically works 9-5 EST and hasn't logged in during off-hours in the past 6 months
While she has the required permissions, she's never accessed this database directly
Her peer group (other developers on the payment team) typically accesses this data through the internal dashboard
The access attempt came shortly after Alice's credentials were used to log in from a new IP address in Romania"
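The behavioral signals in an assessment like this boil down to comparing one event against a user's historical baseline. Here's a minimal, hypothetical sketch of that comparison; the field names and baseline shape are illustrative assumptions, not any product's schema:

```python
# Hypothetical anomaly checks for a single access event against a
# per-user baseline. All field names here are illustrative assumptions.

def anomaly_reasons(event, baseline):
    """Return the list of ways this event deviates from the baseline."""
    reasons = []
    start, end = baseline["usual_hours"]
    if not (start <= event["hour"] < end):
        reasons.append("off-hours access")
    if event["resource"] not in baseline["resources_accessed"]:
        reasons.append("first direct access to this resource")
    if event["access_path"] != baseline["peer_group_path"]:
        reasons.append("bypassed the path peers normally use")
    if event["geo"] not in baseline["known_geos"]:
        reasons.append("login from a new location")
    return reasons

baseline = {
    "usual_hours": (9, 17),          # 9-5 EST working pattern
    "resources_accessed": set(),     # never touched the prod DB directly
    "peer_group_path": "internal_dashboard",
    "known_geos": {"US"},
}
event = {"hour": 3, "resource": "prod_database",
         "access_path": "direct", "geo": "RO"}

print(anomaly_reasons(event, baseline))  # all four signals fire
```

A real system would learn the baseline from months of logs rather than hard-code it, but the output is the same shape: a human-readable list of deviations, not a bare risk score.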
This initial assessment draws on months of historical data, peer group behavior patterns, and asset context that would take a human analyst hours to compile and understand. But the real power comes in the interactive investigation that follows.
Rather than jumping between pages and tools, analysts can ask follow-up questions in natural language: "Show me all of Alice's authentication events in the past 24 hours" or "Have any other team members exhibited similar patterns?" The AI analyst maintains context throughout the conversation, understanding that "similar patterns" refers to off-hours database access attempts, not just authentication events.
This creates a fundamentally different kind of investigation flow:
Start with synthesized context instead of raw data
Explore scenarios through natural conversation
Let the machine handle data correlation across sources
Focus human attention on decision-making rather than data-gathering
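The context carry-over that makes this conversational flow work can be illustrated with a toy query loop. This is a minimal sketch under stated assumptions: the event store and field names are hypothetical, and a real agent would translate natural language into these filters rather than taking them as keyword arguments.

```python
# Toy sketch of context carry-over between follow-up questions.
# The event schema is a hypothetical stand-in for a real log store.

class Investigation:
    def __init__(self, events):
        self.events = events
        self.context = {}  # filters established by earlier questions

    def ask(self, **filters):
        # New filters refine the running context rather than replacing it,
        # so a follow-up question builds on what was already established.
        self.context.update(filters)
        return [e for e in self.events
                if all(e.get(k) == v for k, v in self.context.items())]

events = [
    {"user": "alice.waters", "type": "auth", "off_hours": True},
    {"user": "alice.waters", "type": "db_access", "off_hours": True},
    {"user": "bob", "type": "db_access", "off_hours": True},
]

inv = Investigation(events)
inv.ask(user="alice.waters")        # "Show me Alice's events"
hits = inv.ask(type="db_access")    # follow-up inherits the user filter
print(hits)                         # only Alice's db_access event remains
```

The design point is simply that each answer narrows the working set instead of starting a fresh query, which is what lets the analyst say "similar patterns" without re-specifying everything.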
Most importantly, every investigation becomes a learning opportunity. The AI analyst observes how human analysts approach different scenarios, which questions they ask, and what patterns they find significant. This creates a continuous feedback loop that makes the system more effective over time, much like a junior analyst learning from their more experienced colleagues.
Critically, this learning isn't limited to investigation patterns. Teams can directly inject organizational context that shapes how the AI analyst evaluates situations. Like teaching a new team member, you can provide crucial context like "these admin accounts should never access production outside of change windows" or "engineers in the payments team only interact with customer data through our internal dashboard." This institutional knowledge - often scattered across wikis, runbooks, and senior analysts' heads - becomes part of the AI's analytical framework, helping it understand what "normal" looks like in your unique environment.
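One way such institutional knowledge could be represented is as declarative rules the AI consults during triage. The rule shapes and event fields below are hypothetical, intended only to show how prose guidance like "admin accounts never access production outside of change windows" might become machine-checkable context:

```python
# Sketch of organizational context expressed as declarative rules.
# Rule predicates and event fields are illustrative assumptions.

ORG_CONTEXT = [
    {"rule": "admin accounts never access prod outside change windows",
     "applies": lambda e: e["account_type"] == "admin"
                          and e["env"] == "prod"
                          and not e["in_change_window"]},
    {"rule": "payments engineers only touch customer data via the dashboard",
     "applies": lambda e: e["team"] == "payments"
                          and e["data_class"] == "customer"
                          and e["access_path"] != "internal_dashboard"},
]

def violated_rules(event):
    """Return the organizational rules this event breaks."""
    return [r["rule"] for r in ORG_CONTEXT if r["applies"](event)]

event = {"account_type": "admin", "env": "prod", "in_change_window": False,
         "team": "payments", "data_class": "customer", "access_path": "direct"}

print(violated_rules(event))  # both rules fire for this event
```

The appeal of this shape is that the same sentence a senior analyst would say out loud becomes both the rule's documentation and its alert text.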
Enhancing Human Analysis
The best detectives aren't just good at finding evidence; they're masters at recognizing patterns from past cases, understanding the local environment, and knowing which questions to ask. The same is true for security analysts. Yet, while human analysts can forget details or miss connections, AI analysts bring perfect recall and consistent pattern recognition to the partnership.
Consider how experienced analysts operate: they develop an instinct for which alerts matter most, which user behaviors are truly suspicious, and what "normal" looks like across different teams and systems. This expertise traditionally takes years to develop and is nearly impossible to transfer. But with AI analysts, we can codify this knowledge and apply it consistently across every investigation with the added benefits of attention to detail and a vast context buffer.
The key is teaching these systems to think like your best analysts. This goes beyond simple playbooks or decision trees. When investigating a suspicious login, a well-trained AI analyst might note: "While this login came from a new IP address, the user followed their typical pattern of checking GitHub notifications and accessing our CI/CD pipeline. This matches the behavior we typically see from our remote engineering team." This kind of contextual analysis comes from understanding not just the technical indicators but also the human patterns that make up your organization.
But AI analysts shouldn't just execute your playbook; they should help evolve it. Through continuous interaction, they can identify emerging patterns that even experienced analysts might miss:
🤖 "I've noticed an 80% increase in failed MFA attempts from the developer team during off-hours over the past month, correlating with our new deployment schedule."
These insights help human analysts focus their attention where it matters most.
The goal isn't to remove humans from the loop but to create a more effective partnership. Human analysts still make crucial decisions and handle novel situations, but they do so with comprehensive context and pattern recognition that would be impossible to maintain manually. It's like having a partner with perfect memory who can instantly recall every similar case you've ever worked on.
Making Better Decisions Together
Security analysis is ultimately about making decisions: Is this alert a true positive? Should we escalate this incident? Do we need to wake up the infrastructure team at 2 AM? These decisions only get harder as our systems grow more complex and attacks more sophisticated. However, the combination of machine pattern recognition and human intuition creates a new kind of decision-making framework.
Consider a common SOC scenario: an alert fires indicating potential data exfiltration from a production system. A traditional investigation might start with the most recent logs, working backward to understand what happened. But an AI analyst takes a different approach:
🤖 "This upload activity matches an approved data migration pattern we've seen 47 times in the past year from the data science team. However, three deviations make this instance suspicious:
The process is running under a service account that was recently accessed by a user who left the company
The destination IP, while legitimate, hasn't been used in any previous migrations
The data volume is 3x larger than any previous migration"
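A check like this can be sketched as a comparison against a learned migration baseline. The field names, baseline structure, and thresholds below are illustrative assumptions, not a real detection's schema:

```python
# Hypothetical sketch: score an upload against a baseline built from
# prior approved migrations. All fields and thresholds are assumptions.

def migration_deviations(upload, history):
    """Return the ways this upload deviates from past approved migrations."""
    deviations = []
    if upload["service_account"] in history["accounts_touched_by_leavers"]:
        deviations.append("service account recently used by a departed employee")
    if upload["dest_ip"] not in history["dest_ips"]:
        deviations.append("destination IP never seen in prior migrations")
    ratio = upload["bytes"] / max(history["volumes"])
    if ratio >= 3:
        deviations.append(f"volume {ratio:.0f}x larger than any prior migration")
    return deviations

history = {
    "accounts_touched_by_leavers": {"svc-etl-01"},
    "dest_ips": {"10.0.4.12", "10.0.4.13"},
    "volumes": [40_000_000_000, 55_000_000_000],  # bytes, prior migrations
}
upload = {"service_account": "svc-etl-01",
          "dest_ip": "10.0.9.99",
          "bytes": 165_000_000_000}

print(migration_deviations(upload, history))  # all three deviations fire
```

Note that a matching pattern with zero deviations would be silently filed as routine; the value is in surfacing only the residual anomalies for a human to judge.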
This analysis combines immediate technical indicators with deep historical context that would be impractical for a human to maintain. More importantly, it surfaces the relevant anomalies that warrant human investigation rather than just raw data that needs interpretation.
The AI analyst can then maintain this context throughout the incident lifecycle. When a senior analyst joins the investigation, instead of reading through a long ticket history, they can ask: "What actions have been taken so far, and what evidence supports our current assessment?" The AI provides a straightforward narrative of the investigation's progression, decisions, and supporting data while maintaining links to the evidence.
This institutional knowledge becomes particularly valuable during complex incidents that unfold over time. The AI analyst can spot subtle connections between seemingly unrelated events: "This failed login attempt shares characteristics with an incident from three months ago that was initially classified as a false positive but later linked to a broader campaign." This pattern recognition across time and data sources helps teams identify sophisticated attacks that might go unnoticed.
The Strategic Security Analyst
The role of the SOC analyst is transforming from data hunter-gatherer to strategic investigator. Rather than spending hours pivoting between tools and manually correlating events, analysts can focus on what humans do best: understanding context, making nuanced decisions, and identifying novel threats that don't fit established patterns.
This shift requires new skills and mindsets. The most effective analysts won't be those who can write the most complex queries or memorize the most IOCs; they'll be the ones who can:
Ask the right questions to guide AI investigations
Recognize when to trust automated analysis and when to dig deeper
Effectively teach systems about new threats and organizational context
Focus on strategic analysis while letting machines handle routine tasks
Success metrics are evolving, too. Instead of measuring alert closure rates or mean-time-to-detection in isolation, teams should evaluate the effectiveness of the human-AI partnership. Are investigations becoming more thorough? Are we catching sophisticated attacks earlier? Is institutional knowledge being effectively preserved and applied?
The future of security operations isn't about humans versus machines; it's about creating partnerships that combine the best of both. AI analysts bring tireless pattern recognition, perfect recall, and rapid data correlation. Human analysts bring intuition, strategic thinking, and the ability to understand context beyond pure data.
This partnership will continue to evolve as AI capabilities advance. But the core principle remains: these systems should enhance rather than replace human expertise. We can build more efficient and effective security operations by teaching AI analysts to understand our environments, recognize our patterns, and support our decision-making processes.
The most successful security teams will embrace this evolution, focusing on building strong partnerships between human analysts and their AI counterparts. After all, security is a human problem that requires human insight; we're just getting better tools to apply that insight at scale.