Everything is an Event: Questioning Authority with Rocana CTO Eric Sammer

This is the third in Blue Hill Research’s blog series “Questioning Authority with Toph Whitmore.”

Eric Sammer is one of the co-founders at Rocana. Almost-three-year-old Rocana is a leader in the data operations management space, and San Francisco-based Sammer serves as its CTO. His dev work led to the release of Rocana’s flagship technology, Rocana Ops, and he was one of the driving forces behind the open-sourcing of Rocana’s Osso event-serialization data schema. We spoke recently about Rocana, Osso, and the romantic enterprise ideal of total operational visibility.

TOPH WHITMORE: Let’s start at the beginning. You were leading an engineering team at Cloudera. What drove you and your co-founders to start Rocana?

ERIC SAMMER: Rocana was born out of what customers were already doing with Hadoop and the open-source ecosystem. At Cloudera, we wound up building the same [data ops management] system over and over and over again. We looked at that and said, “Everybody using this technology is trying to solve the same set of problems.” And over time, we captured the commonality.

TW: You saw professional services reinventing the Ops-management wheel with every Hadoop deployment. What was the repeatable piece that became the basis of Rocana technology?

ES: Each of those deployments—at the end of the day, they were all the same architecture. It was a system that brought in fine-grained, domain-specific event data, then combined data from disparate systems in ways that allowed the business to do things it hadn’t been able to do before, or to do what it did before, but cheaper, faster, or with greater insight.

The high-level use cases were everything from operational analytics to business analytics to ad technology to market-trading systems to optimization systems to fraud detection. But the architecture, the way the analytics worked…it was all the same platform, all the same technology.

TW: How did you start Rocana? Were you in stealth mode for a while?

ES: We took what we had learned as IT operators and as enterprise software and integration experts and asked, “What’s the killer Big Data application?” And we centered on this event operational data warehouse.

We approached some of our past industry contacts and said “You’re trying to build this system. What if you work with us? Let us build the platform and the analytics, and you can get back to banking, or retail, or whatever it is you focus on.” And that led to the initial set of Rocana customers. We built alongside them, and a couple of development partner “alpha” customers converted into GA production customers as we matured.

TW: Rocana takes an event-based approach to operations data. What does that mean and why is it important with regard to data operations management technology?

ES: The notion of the event is powerful…because of its simplicity, its ubiquity across traditional IT. An event can represent logs. It can represent metrics samples. But it can also represent business-level constructs. A user abandoned this shopping cart. Fraud was detected for a particular account.

Really, this notion of events is so simple and ubiquitous. It allowed us to build analytics into the platform, then let us correlate and rationalize across systems. That lets us describe data in a really different way. Not unlike what people on the business side try to do with traditional data warehouses. They’re trying to find the commonality of the schema. In doing that, Rocana has created a data model, a platform, and analytics that can understand and provide a unified view across all of these different facets of the business. Which enables that total operational visibility.
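
The unifying idea Sammer describes—every log line, metric sample, or business action reduced to a single event shape—can be sketched as a minimal record type. The field names here are illustrative, not Rocana’s actual schema:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Event:
    """A generic event record: one shape for logs, metrics, and business actions."""
    ts: float        # epoch timestamp of the event
    host: str        # where the event originated
    service: str     # emitting system or application
    event_type: str  # e.g. "log", "metric", "cart_abandoned"
    body: str = ""   # free-text payload, such as a raw log line
    attributes: dict = field(default_factory=dict)  # structured key/value details

# The same shape carries very different facts:
log = Event(time.time(), "web-01", "nginx", "log", body="GET /cart 500")
metric = Event(time.time(), "web-01", "node", "metric", attributes={"cpu_pct": 93.0})
business = Event(time.time(), "web-01", "storefront", "cart_abandoned",
                 attributes={"session_id": "s-42", "cart_value": 129.99})
```

Because all three records share one schema, a single analytics layer can filter, aggregate, and correlate across them without per-source special cases.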

TW: I like the ideal of total operational visibility. Where does it resonate most? You mentioned a shopping-cart data example—are you seeing uptake in retail? In which verticals have you seen more adoption?

ES: First, there’s a set of horizontal use cases. Take log management, for instance. Or instrumentation for application performance management. Both are ubiquitous, and everyone’s going to have that.

Second, we also see where IT connects to the business, and links the vertical and the horizontal. In the case of retail, this might be understanding how operational events impact online transactions. And so yes, we’re bringing in retail data. In the financial services space, it’s all about trade activity or commercial banking actions, and how they’re impacted by operational data, or how they drive operational events. (Obviously, security is an interesting case there as well.) We have customers today in retail, in financial services, telecommunications, and other verticals.

TW: Help me understand the idealized Rocana ecosystem. I’m an enterprise IT lead. Rocana offers me greater visibility, maybe customer 360 views. What am I deploying architecturally? How does Rocana Ops change my operational model?

ES: Deployment starts with acquiring all of your data feeds or streams…from the networking devices, the application servers and their logs and their metrics. We’re also acquiring data fed from traditional agents, and streams, and other standard sources via pre-built integrations. Rocana ships with that technology out of the box. There are also business feeds of data—inventory updates, clickstream data, transactions, orders that are placed, orders that are abandoned, and so forth. We source all of that into a central repository, pulling it into a standardized schema for representing events.

The analytics on top of that allows us to say “This clickstream has a session identifier that is shared from these operational logs over here.” So now we can join that data. We can say “This transaction was preceded by this series of lead-up events, but here’s the information about the network performance on the port that that application server is attached to.” We’re able to bring all of that information together to diagnose or triage or troubleshoot issues that are happening, or try and predict them before they happen.
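
A toy version of the correlation Sammer describes—attaching operational log events to clickstream events that share a session identifier—can be sketched as a simple hash join. The dict shapes and field names are illustrative assumptions, not Rocana’s implementation:

```python
from collections import defaultdict

def join_on_session(clickstream, op_logs):
    """Group operational log events by session_id, then attach them to
    each clickstream event sharing that session (a simple hash join)."""
    logs_by_session = defaultdict(list)
    for ev in op_logs:
        logs_by_session[ev["session_id"]].append(ev)
    return [
        {**click, "related_logs": logs_by_session.get(click["session_id"], [])}
        for click in clickstream
    ]

clicks = [{"session_id": "s-42", "action": "checkout"}]
logs = [{"session_id": "s-42", "msg": "port eth0 saturated"},
        {"session_id": "s-99", "msg": "link ok"}]

joined = join_on_session(clicks, logs)
# Each checkout event now carries the operational context from its own session.
```

A real system would do this continuously over streams rather than in-memory lists, but the principle—one shared key bridging business and operational events—is the same.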

TW: You’re bringing—I believe the technical term is “a hell of a lot more data”—into the system. Rocana can scale to accommodate that higher volume of data, but do I get penalized with higher costs if my data grows exponentially?

ES: No. We want to avoid that! We want to encourage you to use the system rather than scare you away.

Yes, Rocana Ops brings in a lot of data, but no, it doesn’t become more expensive for you, the customer. We think about this in terms of the value you’re extracting from the data, vs. charging you for simply acquiring the data. There are a few dimensions to cost. One is the infrastructure it takes to run Rocana Ops. Rocana is able to take advantage of all the advancements in modern (commodity) hardware, as well as cloud. On the licensing side, we align pricing with users of the application, as opposed to the number of nodes or amount of data coming into the system.

TW: In October, Rocana open-sourced its Osso event-serialization data schema. What exactly is Osso, and why did you choose to open-source it?

ES: We’ve extracted the schema—the way that we represent event data within Rocana Ops—and published that as an open-source project. We’ve always used open-source formats, and Osso takes full advantage. We wanted to capture and codify the semantics and the information around those formats, so that it was easier for other vendors to start to modernize how we represent this data, and to make it easier for data integration. But we also wanted customers to be able to use this data for things that we haven’t thought of!

We recognize that Rocana is only a piece of the larger enterprise. For some specialized data processing, Rocana may be the platform, but not the analytics. We want people to be able to take advantage of that data regardless. By publishing the Osso standard, by creating that schema, and explaining exactly how we store and manage and organize data, people can use third-party tools to build new systems on top of Rocana and the data they’ve collected. Then they can use that as if they had built the system themselves, giving them the benefit of do-it-yourself, without the investment of do-it-yourself.

TW: What’s next?

ES: We’re taking total visibility and turning it into real-time action. In earlier releases, we delivered on our goal of providing a complete, online, real-time view of everything that has happened across every system under management. Now we want to focus on turning that data into action. The obvious use case is alerting, but under the hood what we’re delivering in our latest release is a full-featured complex event processing system. We also just released Rocana Reflex, a new event alerting and orchestration system that gives ops teams the ability to trigger instant, automated reactions based on what’s happening in their environment. Applying advanced analytics and anomaly detection makes everything easier to understand through intuitive “first-responder” views. It sounds counterintuitive, but by taking in more information we can make each decision or action smarter. We eliminate false positives and noise by understanding the context in which things occur before we jump to action or alert.
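
The context-aware alerting idea—checking surrounding events before firing—can be sketched as a tiny rule. The event shapes, rule, and threshold here are hypothetical illustrations, not Rocana Reflex’s actual API:

```python
def should_alert(event, recent_events, window_s=60):
    """Fire only if the error is not explained by a nearby maintenance
    event -- surrounding context suppresses false positives."""
    if event["type"] != "error":
        return False
    for other in recent_events:
        if (other["type"] == "maintenance_start"
                and abs(event["ts"] - other["ts"]) <= window_s):
            return False  # expected during maintenance: suppress the alert
    return True

history = [{"type": "maintenance_start", "ts": 100}]
should_alert({"type": "error", "ts": 130}, history)  # inside the window: suppressed
should_alert({"type": "error", "ts": 500}, history)  # no explaining context: fires
```

The point of the sketch is the shape of the decision: more input (the maintenance history) yields fewer, smarter alerts, which is the counterintuitive trade-off Sammer describes.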

We’ve focused on this three-step approach. Step 1: Collect and create that repository of data that can be used. Step 2: Be able to take action on ALL of that data and make better decisions with it through things like intelligent automation and orchestration. Step 3 and everything that comes after that: Create more intelligence and knowledge about that data to help people understand this enormous haystack. The future is being able to help people find needles within it and enable intuitive, higher-order thinking around them. We’ll focus on horizontal use cases, but you will see us start to target different verticals in earnest. And really help people to get better value out of the data they’ve already collected and they’re already using within Rocana.

About Toph Whitmore

Toph Whitmore is a Blue Hill Research principal analyst covering the Big Data, analytics, marketing automation, and business operations technology spaces. His research interests include technology adoption criteria, data-driven decision-making in the enterprise, customer-journey analytics, and enterprise data-integration models. Before joining Blue Hill Research, Toph spent four years providing management consulting services to Microsoft, delivering strategic project management leadership. More recently, he served as a marketing executive with cloud infrastructure and Big Data software technology firms. A former journalist, Toph's writing has appeared in GigaOM, DevOps Angle, and The Huffington Post, among other media. Toph resides in North Vancouver, British Columbia, Canada, where he is active in the local tech startup community as an angel investor and corporate advisor.
Posted on December 13, 2016 by Toph Whitmore
