2025 Gartner® Market Guide for Event Intelligence Solutions
Selector Recognized as a Representative Vendor.


Selector Gets Real with the Packet Pushers Heavy Networking Podcast Team

Until now, the big promise of AIOps has come up short on delivering meaningful AI analysis. Although many companies have branded their tools as AIOps, those tools mostly rely on simple statistical models that flag metrics deviating from the norm.

Selector has broken this trend and changed the minds of many with its true AIOps networking solution, as revealed in episode 707 of Packet Pushers’ Heavy Networking podcast. This weekly show features network engineers, industry experts, and vendors who share practical information about what works and doesn’t work in networking. 

In episode 707, Selector Co-founder and CTO Nitin Kumar and Product Manager Kevin Kamel joined hosts Ethan Banks and Andrew Conry-Murray to discuss Selector’s AIOps approach to network observability and explain how it makes network observability easier for network engineers. If you haven’t listened to the episode yet, keep reading to get the conversation highlights.

Transformation, from operational data to insights

To start, Kamel describes Selector as “a platform to transform your operational data—telemetry—into actionable insights.” He says it establishes a sense of what’s normal for your environment and then points you toward the issues to help you quickly address them. 

Selector surpasses traditional anomaly detection, which simply issues multiple alerts and leaves it up to network engineers to sort through the data, logs, and metrics to determine the problem. Kamel explains that Selector collects the data and uses sophisticated machine learning (ML) to analyze that telemetry. That analysis surfaces the anomalies and abnormal conditions occurring within the telemetry. Then, Selector builds context around what might be happening more broadly within the environment.

Kumar adds, “Context is important because the information we provide has to be actionable. It needs to tell you what is likely affected and what to do next. … A lot of analysis [goes on behind] the scenes for Selector to make these predictions.”

Augmentation not replacement

Kamel goes on to explain that Selector doesn’t replace your existing tools but augments them by placing Selector on top of your current ecosystem. “We’re really looking to collect all of that data from the existing tools, do our magic, and then surface the insights from your telemetry. This allows us to show quickly … what the platform can do without an extended POC (proof of concept). The idea of ripping and replacing these solutions is just a political type of activity anyway, which we want to avoid.”

Kamel says, “I see us as augmenting the entire monitoring and observability ecosystem. It’s not just that we’re augmenting one type of telemetry or offering one type of insight. It’s that we’re going to collect this data from everything you have—the more data, the better, from our perspective. Then we’re going to add a whole new layer of insight and value to it, effectively breathing new life into the legacy monitoring ecosystem that you tend to find within large organizations.”

True full-stack observability

Digging into Selector’s approach, Kamel highlights the generalized capability of Selector’s platform. “In the market now, you see quite a few full-stack observability offerings. … when you look under the covers, these platforms don’t look at the network. They call themselves full stack, but that’s a misnomer [because] they’re ignoring the delivery of their application to the end user.”

He explains that they’ll either ignore the network entirely or provide a token functionality where a product manager checks a box to mark that they’re collecting basic network information. Kamel says, “It’s not really full stack to me. I started using the term true full-stack observability to represent this capability that Selector can collect the telemetry from the network, all the way up to the application and everything in between. … that’s the novel capability we have.”

Selector goes beyond application performance management (APM). It understands the relationships and dependencies needed to correlate inferences and connections, without requiring an engineer to build a solution from the ground up.

Collection, correlation, and collaboration

Under the covers, Selector’s data-driven platform collects the data, correlates and computes metrics and logs into events, and enables collaboration. Kumar breaks down the process.

1. Collect the data

Collection requires a lot of data engineering and moving large volumes of data from place to place. This data includes:

  • Device data, such as Simple Network Management Protocol (SNMP) telemetry and log stores
  • Metadata, such as from a configuration management database (CMDB)
  • Workflow tools, such as ServiceNow and PagerDuty

Selector solves the data collection challenges in three ways:

  • First, Selector has a robust collection mechanism that uses Kubernetes to scale out your compute and storage resources elastically. 
  • Second, it collects data across a hybrid environment, pulling together disaggregated data from various repositories either on-premises or in the cloud. 
  • Third, it takes data in any format you have, compiles it, and generates binaries that produce normalized data on the other side.
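The third point, taking data in any format and producing normalized data on the other side, can be pictured as mapping heterogeneous payloads into one common record. The sketch below is illustrative only: the field names and `NormalizedMetric` schema are assumptions, not Selector's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical common record; Selector's actual schema is not public.
@dataclass
class NormalizedMetric:
    source: str
    device: str
    name: str
    value: float
    timestamp: datetime

def normalize_snmp(sample: dict) -> NormalizedMetric:
    """Map a raw SNMP poll result (assumed field names) into the common record."""
    return NormalizedMetric(
        source="snmp",
        device=sample["agent"],
        name=sample["oid_name"],
        value=float(sample["value"]),
        timestamp=datetime.fromtimestamp(sample["ts"], tz=timezone.utc),
    )

def normalize_cloud(sample: dict) -> NormalizedMetric:
    """Map a cloud-provider metric point (assumed field names) into the same record."""
    return NormalizedMetric(
        source="cloud",
        device=sample["resource_id"],
        name=sample["metric"],
        value=float(sample["datapoint"]),
        timestamp=datetime.fromisoformat(sample["time"]),
    )
```

Once every source lands in one shape, the downstream correlation layer only has to reason about a single record type, regardless of where the telemetry came from.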

2. Correlate and compute metrics and logs

Correlation refers to how Selector takes in data streams and produces a knowledge graph. The data streams fall into categories, such as metrics and logs. Selector uses ML to perform the correlations: it converts the metrics, establishes baselines, and transforms deviations into events. It follows the same process for logs, converting sentences into events using named entity recognition.

After Selector creates a common currency of events with the metrics and logs, it correlates them, using inference to group the events. Selector then uses correlation algorithms to build a knowledge graph, which produces the insights network engineers need.
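The metrics-to-events step can be illustrated with a toy baseline model. This is a hedged sketch using a rolling z-score; the episode describes ML baselining, and real systems model seasonality and trend rather than a flat threshold.

```python
import statistics
from collections import deque

def metric_events(samples, window=30, threshold=3.0):
    """Flag metric samples that deviate from a rolling baseline.

    A toy stand-in for the ML baselining described above.
    samples: iterable of (timestamp, value) pairs in time order.
    Returns a list of event dicts for anomalous points.
    """
    history = deque(maxlen=window)
    events = []
    for ts, value in samples:
        if len(history) >= 5:  # wait until a minimal baseline exists
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            z = (value - mean) / stdev
            if abs(z) >= threshold:
                events.append({"ts": ts, "value": value, "z": z})
        history.append(value)
    return events
```

A flat stream produces no events; a sudden spike against an established baseline becomes a discrete event that can then be correlated with events from logs.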

Kumar says, “Correlations are always happening in the background. It looks at a sweep of 5–10 minutes in the past, finds a correlation, and keeps moving. It’s doing a continuous sweep of events.” He adds the interval is tunable, but you don’t want it too short, because then you must deal with false positives. “Usually, correlations are discovered and shared in 1–5 minutes.”
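The continuous sweep Kumar describes can be approximated by grouping events that fall inside a shared look-back window. This is a simplified sketch; the actual correlation also weighs topology and learned co-occurrence, not just time proximity.

```python
def sweep_correlate(events, window_s=300):
    """Group events into correlation windows (default 5 minutes).

    A simplified take on the continuous sweep described above.
    Events are dicts with a "ts" key in seconds.
    """
    groups = []
    current = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        # start a new group once the window since the group's first event elapses
        if current and ev["ts"] - current[0]["ts"] > window_s:
            groups.append(current)
            current = []
        current.append(ev)
    if current:
        groups.append(current)
    return groups
```

Tuning `window_s` mirrors the tradeoff Kumar mentions: too short a sweep fragments related events and raises false positives, while too long a sweep delays the insight.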

3. Collaborate over insights

Kumar emphasizes that the collaboration layer is the most important in network observability solutions like Selector. He says, “It has to be done in a simple way. That’s where our integration with Slack and Microsoft Teams is important because we share the insights in plain English.”

By using plain English, engineers don’t have to learn a new language or go through JSON or cryptic labels to understand what they need to do. Kumar adds, “We use natural language generation frameworks to take internal JSON data and turn it into English output. … We call it a democratization of your observability. It’s no longer a responsibility of the experts; anybody can have access to the insights the system produces.”
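The JSON-to-English step can be pictured with a minimal template. The field names here are hypothetical, and Kumar describes natural language generation frameworks rather than hand-written templates like this one.

```python
def event_to_english(event: dict) -> str:
    """Render an internal JSON-style event as a plain-English sentence.

    A template-based sketch with assumed field names; not Selector's
    actual NLG pipeline.
    """
    return (
        f"{event['severity'].capitalize()} anomaly on {event['device']}: "
        f"{event['metric']} is {event['direction']} its baseline "
        f"(correlated with {event['related_count']} other events)."
    )
```

The output is the kind of sentence that can be dropped directly into a Slack or Microsoft Teams channel, so nobody has to parse raw JSON during an incident.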

Looking ahead

As the conversation concludes, Kamel highlights key developments the Selector team is pursuing:

  • Using Selector to build a digital twin of a network: “[Our customers] want to predict what will happen to workflows when the network goes down using the knowledge graph technology in Selector.”

    Some clients have used Selector in building a digital twin of their network so they can predict which telemetry won’t get routed properly when the network goes down. Selector plans to fine-tune this process for more clients to apply this approach. 
  • Selector Copilot: “A game changer in terms of truly democratizing access to your operational data. … The idea is to level up the conversational interaction model that we offer today.” 

    Selector Copilot uses conversational AI to help triage network issues. For example, users can ask questions like, What’s wrong? What’s the severity of the impact? Why is this happening? Do we have a way to fix this? Selector Copilot can then respond to say when the incident occurred, whether it was fixed, how to fix it, and so on. 
  • True full-stack observability to take telemetry from the network through the application: “You can tie the user experience down through the entire stack and sort of pinpoint exactly where challenges or friction with the user experience is coming from.”

    Selector plans to further develop the platform to deliver on that true full-stack observability concept. The company wants to take telemetry from the network up through the application using AI and ML, an approach it says no other vendor in the market takes today. 

Catch the complete episode 707 of the Heavy Networking podcast, Getting Real With Selector’s AIOps, on the Packet Pushers website. Or stream it on your favorite podcast platform, including Apple Podcasts and Spotify.

More on our blog

The Fragmentation Tax: What Multi-Tool Incident Response is Really Costing You

Here’s a question that sounds simple but isn’t: when something breaks in your environment, how long does it take your team to agree on what they’re looking at? Not how long it takes to fix it—that’s a different problem. I mean: how long does it take for everyone on the bridge to have the same basic understanding of what’s broken, where it started, and what it’s affecting?

If your answer is anything other than “pretty much immediately,” you’ve got a fragmentation problem. And chances are it’s costing you more than you think.

Consider the following scenario: alarms are flooding in. Multiple servers in the data center are unreachable. Applications are throwing connection errors. The war room comes online, and everyone — NetOps, infrastructure, the application team, and systems engineering — joins. Everyone opens their tools, and what do they see?

The network team sees a BGP state change. Peers went down, routes withdrew. Infrastructure sees high CPU alarms on the core router, followed by a line card reset. The server team’s looking at dozens of hosts that lost connectivity simultaneously. The application team sees cascading failures across services that depend on those servers. The NOC pulls up a configuration change that was pushed to the router forty minutes earlier.

So which one caused it?

The silence tells you everything you need to know.

Why Everyone’s Right, and Nobody Knows Why

The frustrating part about this type of scenario is that every tool is correct. The BGP flap is real; the high CPU and line card reset occurred. The servers lost connectivity, applications are failing, and a config change was deployed.

But somehow, even with all this data, you still can’t see what’s actually going on.

It’s not that you’re missing information; it’s that the information lives in five different places, and each place is telling you a different story. Every tool in your arsenal did its job effectively, but they aren’t talking to each other.
And that leaves you — at whatever ungodly hour this is happening — tabbing between dashboards, trying to build a timeline in your head, while someone on the bridge asks if you’ve checked whether the change was actually validated in staging.

The problem here is not the complexity of your systems. It’s that your understanding is in multiple different pieces.

The Architecture of Confusion

When five engineers look at five different dashboards and come away with five different theories about what’s broken and how to fix it, that’s not a failure of skill. It’s a failure of architecture.

Most monitoring and observability platforms are built around what we consider to be a vertical data model. Data comes in and gets sorted by type. Logs go into the log pipeline, metrics go into the metrics pipeline, and so on. Network events, infrastructure alerts, and application traces each get their own lane, their own schema, their own storage, and their own analytics.

Most platforms can ingest multiple types of data, but each type still lives in a silo. You can set up correlations — match timestamps, trigger alerts when two things happen at once — but those correlations are brittle and predefined. They know “if X, then Y,” but they don’t know the why.

That’s the gap.

That’s why five smart people (often a lot more than five) on a bridge call can look at the same incident and walk away with completely different understandings of what happened. The tools aren’t designed to give you a shared view. By nature, most of your tools are designed to optimize analysis within their own domain. So when something breaks across domains — which is, let’s be honest, most of the time — you’re left stitching the story together yourself. And you have to do it manually, under pressure, while the alarms keep coming in.

What Changes When Data Speaks the Same Language

There’s a different way to do this. Instead of organizing data by type, you can organize it by relationship.
We call this “Horizontal Data Ingestion.”

Selector doesn’t care if something is a log or a metric or a BGP event or a line card reset. It’s all knowledge, and we ingest it all — network telemetry, infrastructure metrics, application logs, topology data, change records, configuration pushes, even emails if that’s important to you. Then we use patented AI and ML models to figure out how it’s all connected.

We don’t ask you to tag things in advance. We don’t need you to define schemas. We don’t care if your infrastructure spans on-prem data centers, cloud, hybrid environments, or a mix of vendors that nobody planned but everyone has to live with.

We just ingest it. And then we learn it.

The models we use do three things:

The result isn’t five dashboards with five stories (or more). It’s one operational view of what’s actually happening.

When correlation stops being about matching timestamps and starts being about understanding causality, the whole game changes. You stop pointing fingers and start solving problems.

Same Incident, Different Outcome

Let’s go back to that scenario. Alarms flooding, servers unreachable, applications failing.

But this time, there’s no war room.

Instead, a Smart Alert hits Slack. One alert. Not dozens of fragmented notifications across five different tools. The alert shows you everything:

And here’s the part that actually matters: the person who gets that alert understands what happened without needing to pull everyone else into the problem.

They see what broke, where it started, what it’s affecting, and what needs to happen next. If they need to escalate or create a ticket, there’s a button right there to push it to ServiceNow — with all the correlation, context, and causation already included.

No dashboard archaeology.

No manual timeline reconstruction.

No debate about whether this is a network problem or an infrastructure problem.

Just what happened, why, and how to fix it.
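One way to picture relationship-first ingestion is an index that connects heterogeneous records through the entities they mention, so a BGP event, a CPU alarm, and a config push touching the same router become linked. This is a toy sketch with assumed field names; the post credits patented AI and ML models, not a simple inverted index, for the real linking.

```python
from collections import defaultdict

def link_by_entity(records):
    """Connect heterogeneous records through the entities they mention.

    A toy illustration of relationship-first ingestion, with assumed
    field names ("id", "entities"); not Selector's actual mechanism.
    """
    by_entity = defaultdict(list)
    for rec in records:
        for entity in rec.get("entities", []):
            by_entity[entity].append(rec["id"])
    # keep only entities that actually connect two or more records
    return {k: v for k, v in by_entity.items() if len(v) > 1}
```

Even this crude index surfaces the shared thread a human would otherwise reconstruct across five dashboards: the records that matter are the ones that touch the same piece of infrastructure.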
Selector isn’t just making incident response faster. We are fundamentally changing how incident response works.

Integrate First, Consolidate Later

Look, we

Key Takeaways From the 2025 Gartner® Market Guide for Event Intelligence Solutions

The 2025 Gartner® Market Guide for Event Intelligence Solutions reflects a shift in how IT operations leaders evaluate AI-driven technologies. As AI hype gives way to more practical evaluation, we are seeing a natural departure from broad promises about AI capabilities toward clearly defined use cases and outcomes.

In their research, Gartner reframes the market formerly known as “AIOps platforms” as Event Intelligence Solutions (EIS), emphasizing correlation, context, and response over generic AI claims. While Gartner examines the evolving role of event intelligence in modern IT operations, we have identified five key takeaways in the market guide. This week, we will share Selector’s perspective on how these ideas translate into real operational value.

Selector is proud to have been identified as a Representative Vendor in the 2025 Gartner Market Guide for Event Intelligence Solutions. You can read the full report here.

1. The Market is Resetting Expectations Around AIOps

What Gartner says:

“The term AIOps has been widely adopted by vendors across multiple IT operations markets, often without a clear definition of, or consensus on, what it entails. This, coupled with the associated AI hype, has led to both confusion and disillusionment among infrastructure and operations (I&O) leaders, whose expectations have not been met.”

“The renaming of this market from AIOps platforms to EIS serves to direct focus to the intended domain and set of use cases. Namely, the application of AI, ML and advanced analytics to cross-domain events from monitoring and observability tools to augment, accelerate and ultimately automate response.”

Selector’s perspective:

From our perspective, Gartner’s reframing reflects a broader shift in how operations teams evaluate AI in practice. The challenge was never the potential of AI itself, but the lack of clarity around where and how it should be applied to deliver operational value.

Selector was built with this distinction in mind.
Rather than positioning AI as a standalone capability, we focus on applying intelligence to a specific operational domain: cross-domain events produced by monitoring and observability tools. The goal is not to “add AI” to operations, but to help teams augment human decision-making, accelerate response, and progressively move toward automation in areas where confidence and process maturity allow. In other words, AI in and of itself is not the end goal; rather, it is a strategic enabler of the desired outcomes.

We believe this approach mirrors Gartner’s emphasis on use cases and outcomes over terminology. By focusing on event intelligence as a defined operational layer — rather than a broad, catch-all AIOps concept — Selector aims to help teams move past abstract AI promises and focus on measurable improvements in how incidents are understood and handled.

2. Event Noise is the Core Operational Bottleneck

What Gartner says:

“It is not unusual for larger enterprises to have portfolios of five to 50 tools for monitoring, each creating signals that must be correlated, triaged and responded to by IT operations teams.”

“Often cited by I&O leaders as the key, or only, driver for EIS implementation is this ability to reduce event volumes, in extreme cases this can result in a 95%+ reduction in events that require human intervention.”

Selector’s perspective:

We think Gartner’s emphasis on event volume highlights a deeper operational issue: most teams are not overwhelmed because they lack alerts, but because they lack context to understand which signals matter and why.

Selector approaches noise reduction as an outcome of correlation and reasoning, not as a standalone objective. By ingesting events across domains and analyzing their relationships, Selector helps teams distinguish between symptoms and underlying issues.
Events that are causally related can be grouped and contextualized, allowing operators to focus on what requires attention rather than manually triaging large volumes of disconnected alerts.

This approach reflects the idea that sustainable noise reduction should reduce cognitive load without obscuring important signals. Rather than simply suppressing alerts, Selector aims to help teams understand how events relate to one another, their impact, and where to begin investigating.

3. Correlation and Context Drive Faster Resolution

What Gartner says:

“EIS correlate, group and reduce superfluous notifications from monitoring tools, reducing unnecessary human intervention. In addition, events are enriched with additional contextual information relating to, for example, topology, services, owner or priority.”

“Events are additionally enriched with contextual information such as associated impacted business services, prior incidents, change records, owner and even suggested resolver group and remediation action. This correlation and enrichment dramatically reduces the time taken to triage, prioritize, assign and ultimately resolve an event.”

Selector’s perspective:

The way we see it, speed in incident response comes from shared understanding, not just faster alert handling. Correlation becomes most valuable when it explains how events relate to one another across domains and what those relationships mean operationally.

Selector focuses on building and reasoning over live service topology and dependencies so that events can be interpreted in context. By linking events to affected services, historical incidents, and changes, Selector helps teams move more quickly from detection to probable cause, reducing the time spent manually assembling context across tools and teams.

This approach is intended to support faster alignment during incidents.
When operators can see how events connect, which services are affected, and where to begin the investigation, triage and resolution become more efficient and less reliant on ad hoc communication or escalation.

4. GenAI is Useful, But Only When Grounded in Domain Data

What Gartner says:

“EIS vendors have moved quickly to implement large language model (LLM)- and GenAI-based capabilities, the use cases of which are evolving at pace. Natural language summaries of ongoing issues, providing insights into their possible cause, business impact and next steps are targeted at less technical users.”

“The next evolution of these capabilities promises to deliver ever more specialized and sophisticated agentic models targeting broader aspects of the event response and remediation process with expectations being set once again toward fully automated remediation.”

“Aside from evaluating the accuracy and ability of GenAI to replace human toil, I&O teams are challenged by their ability to adapt their processes and roles

How Agentic AI is Redefining Network Operations

For much of the past decade, many of the most ambitious ideas in artificial intelligence lived primarily in research papers, labs, and long-term roadmaps. Agentic AI was no exception. The concept of AI systems capable of reasoning, planning, and acting autonomously was widely discussed but largely theoretical. But earlier this month, Gartner® released its report The Future of NetOps Is Agentic, reflecting a growing consensus that this has changed. What was once conceptual is now becoming operational.

We have reached an inflection point where AI research is being translated into real-world systems, and nowhere is this more evident than in network operations. Across IT operations, and especially in NetOps, the conversation is shifting from how AI analyzes data to how AI takes action. This marks a fundamental break from decades of human-centered workflows that simply cannot scale with the speed, complexity, and interdependence of modern networks.

For the first time in the history of NetOps, teams are beginning to explore an entirely new “art of the possible.” AI is no longer confined to dashboards, recommendations, or post-incident analysis. Instead, intelligent systems can continuously observe live environments, reason across domains, and act on behalf of operators in near real time. This marks a redefinition of how network operations function.

This week, we are exploring what Agentic AI means for network operations, why it matters now, and what must be in place for it to succeed.

Transitioning from AIOps to Agentic Operations

For a number of years now, AIOps platforms (now called Event Intelligence Solutions by Gartner) have focused on applying AI to one of the hardest problems in IT operations: making sense of overwhelming volumes of events and signals. Solutions like Selector have delivered real, measurable value, reducing noise, accelerating root cause analysis, and improving mean time to resolution through event correlation and contextual enrichment.
However, AIOps was never designed to deliver full autonomy. By nature, it relies on AI models for optimized pattern detection, inference, and recommendation, with humans remaining responsible for decision-making and action. The fact that AIOps stops short of full autonomy is not a shortcoming but rather a reflection of the maturity of the AI technologies and operational modes available when these platforms emerged.

Agentic NetOps represents the next logical evolutionary step, made possible only now as advances in AI architectures, reasoning systems, and operational guardrails begin to close the gap between insight and action. The 2025 Gartner® Market Guide for Event Intelligence Solutions reframes this evolution by focusing on event intelligence as the foundation for automation and decision-making. According to Gartner: “Event intelligence solutions apply AI to augment, accelerate, and automate responses to signals detected from digital services.”

The framing around this is critical, and our take is that AI must first understand before it can act. That understanding requires unified events, cross-domain context, and causal reasoning — all of which are capabilities that must precede any form of safe autonomy.

Gartner’s 2026 research report, The Future of NetOps Is Agentic, highlights this natural progression: response-focused AI (simple AI chatbots) gives way to task-focused AI (AI assistants), which finally evolves into goal-focused AI (network AI agents). In other words, Event Intelligence (formerly known generally as AIOps) lays the foundation. Agentic AI then builds on that foundation to introduce systems that go beyond recommending actions and instead continuously reason about the environment and execute on behalf of operators.

What makes AI “Agentic” in NetOps?

Agentic AI differs fundamentally from chatbots or task-based assistants.
Rather than responding to prompts or executing predefined workflows, agentic systems operate with:

In practical terms, this means AI agents can monitor live networks, detect emerging issues, investigate root cause across domains, and initiate remediation — often faster and at greater scale than human teams.

Gartner notes that generative AI is accelerating this shift by enabling natural language interaction and deeper contextual reasoning: “EIS vendors have moved quickly to implement large language model (LLM)- and GenAI-based capabilities…These capabilities will increasingly be enhanced with retrieval-augmented generation (RAG) or fine-tuning to provide improved context and reduce the risk of hallucinations and inaccurate findings.”

Gartner also asserts that: “The next evolution of these capabilities promises to deliver even more specialized and sophisticated agentic models targeting broader aspects of the event response and remediation process with expectations being set once again toward fully automated remediation.”

Why Agentic AI is inevitable for network operations

Modern networks are no longer static infrastructures. They are dynamic systems spanning cloud, data center, edge, and SaaS, producing massive volumes of telemetry and events every second. Human-centered operations models simply cannot keep pace.

Gartner highlights the operational pressure facing I&O teams: “Many IT operations teams fail to realize the full potential of event intelligence solutions, realizing a limited value beyond event correlation and noise reduction.”

At Selector, we believe the next leap forward comes from closing the gap between insight and action. Agentic AI enables:

In this model, humans are no longer “in the loop” for every decision, but remain firmly “on the loop,” defining intent, guardrails, and trust boundaries.

The Prerequisites for Agentic NetOps

Agentic AI cannot be bolted onto fragmented tooling or poor data.
Gartner repeatedly emphasizes that value depends on data quality, process maturity, and organizational readiness: “The efficacy of event intelligence solutions is directly related to the sources and quality of data available for ingestion and analysis.”

From our perspective, successful agentic operations require:

Without these foundations, autonomy increases risk rather than reducing it.

Selector’s Perspective: Agentic AI as a Capability, Not a Feature

One of the biggest risks in the current market is superficial “agent washing,” where vendors rebrand chat interfaces or scripts as autonomous intelligence. Gartner warns against this hype-driven approach, noting that AI must be evaluated by its use cases and outcomes, not by terminology.

Selector views Agentic AI not as a single feature, but as a capability that emerges from mature event intelligence. When AI has access to high-fidelity signals, rich context, and causal reasoning, agentic behavior becomes both possible and safe.

This is why Selector has
