Topics of Interest Archives: Business Intelligence

New Revenue Opportunities at the Intersection of IoT and Analytics

Here at Blue Hill, more and more of our time is spent exploring the implications of the Internet of Things. Naturally, there has been a great deal of buzz around IoT analytics.

We’ve had the chance to speak with a number of companies that are doing some truly fantastic things with IoT. This includes everything from farms watering their crops more efficiently to cities optimizing traffic patterns – all of which speak to the potential value that analytics can bring on top of a world of connected ‘things’.

In the IoT gold rush, business leaders should not just ask themselves whether there are opportunities to build top-line growth by bolstering existing revenue streams; they should also be asking how they can grow top-line performance from net-new revenue streams.

One organization I spoke with, an industrial truck manufacturer, shared insights that the rest of us should learn from. As part of their IoT initiatives, they first began collecting data on their trucks to identify common areas of complication. This eventually evolved into the ability to provide predictive maintenance on their trucks. As part of their warranty support, they now identify imminent problems and schedule service appointments before a breakdown ever happens. This represents a series of meaningful incremental improvements that not only save time and money, but also improve the quality of their trucks over time.

However, the manufacturer took a further step in realizing this service would be highly valuable to their customers as well. They now offer the ability to preemptively identify issues and schedule service appointments as a subscription service to their customers. In short, truck buyers can purchase this value-added service on an ongoing annual basis. The result is a better-serviced customer and an entirely new (and recurring) revenue stream.

Any producer of highly valuable capital equipment has a direct lesson to learn from this quick example. The right blend of analytics and IoT enables a new business model – “predictive maintenance as a service.” This is not just an internal business model to improve operational efficiency and maintenance, but an external and client-facing business model that can translate Big Data into Big Money.

More broadly, the lesson is around the usefulness of the vast amount of data that organizations can collect cheaply thanks to the plummeting costs of sensors and data processing power. If the data is valuable in one context, there is at least a chance that it is valuable in another as well.

Business leaders must continue to recognize the transformational shift as every company, whether in manufacturing, retail, or construction, becomes data-centric. As our ability to collect and process data about every aspect of our business increases exponentially, we should always be looking for opportunities to repackage that data to drive new and innovative revenue streams.

What new business models do you see from the intersection of IoT and analytics? Join the conversation on Twitter (@James_Haight) or feel free to email me directly (jhaight@bluehillresearch.com) with your thoughts.


Why Your Data Preparation and Blending Efforts Need a Helping Hand

In past blog posts, we talked about how data management is fundamentally changing. It’s no secret that a convergence of factors – an explosion in data sources, innovation in analytics techniques, and the decentralization of analytics away from IT – creates obstacles as businesses try to invest in the best way to get value from their data.

Individual business analysts are encountering a growing challenge, as the difficulty of preparing data for analysis is expanding almost as quickly as the data itself. Data exchange formats such as JSON and XML are becoming more popular, and they are difficult to parse and make useful. Combined with the vast amounts of unstructured data held in Big Data environments such as Hadoop and the growing number of ‘non-traditional’ data sources like social streams or machine sensors, getting data sources into a clean format can be a monumental task.

Analyzing social media data and its impact on sales sounds great in theory, but logistically, it’s complicated. Combining data feeds from disparate sources is easier now than ever, but it doesn’t ensure that the data is ready for analysis. For instance, if time periods are measured differently in the two data sources, one set of data must be transformed so that an apples-to-apples comparison can be made. Other predicaments arise if the data set is incomplete. For example, sales data might be missing the zip code associated with a sale in 20% of the data set. This, too, takes time to clean and prepare.
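
To make the logistics concrete, here is a minimal sketch of how an analyst might handle both predicaments with a scripting library such as Python’s pandas. The file, column, and metric names are hypothetical placeholders, and a real pipeline would include far more validation:

import pandas as pd

# Hypothetical inputs: daily sales records and weekly social-media metrics
sales = pd.read_csv("sales.csv", parse_dates=["order_date"])
social = pd.read_csv("social_metrics.csv", parse_dates=["week_start"])

# Roll daily sales up to weekly totals so both sources share the same time grain
weekly_sales = sales.resample("W-MON", on="order_date", label="left", closed="left")["revenue"].sum()
weekly_sales = weekly_sales.rename_axis("week_start").reset_index()

# Flag incomplete records, such as the roughly 20% of rows missing a zip code
share_missing_zip = sales["zip_code"].isna().mean()
print(f"{share_missing_zip:.0%} of sales rows are missing a zip code")

# Join the aligned sources for downstream analysis
combined = weekly_sales.merge(social, on="week_start", how="inner")

Even this toy version hints at the real problem: every new source brings its own grain, identifiers, and gaps, and each one adds more reconciliation logic to build and maintain.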

This is a constant challenge, and one that is exacerbated at scale. Cleaning inconsistencies in a 500-row spreadsheet is one thing, but doing so across millions of rows of transaction logs is quite another.

A certain level of automation is required to augment the capabilities of the analyst when we are dealing with data at this scale. There is a need for software that can identify the breakpoints, easily parse complex inputs, and pick out missing or partial data (such as zip codes) and automatically fill it in with the right information. Ultimately, the market is screaming for solutions that let analysts spend less time preparing data and more time actually analyzing it.

For all of these reasons, it is no surprise that a number of vendors have come to market offering a better way to prepare data for analysis. Established players like MicroStrategy and Qlik are introducing data preparation capabilities into their products to ease the pain and allow users to stay in one interface rather than toggle between tools. Others, like IBM Watson Analytics and Microsoft Power BI, are following a similar path.

In addition, a number of standalone products are ramping up their market presence. Each offers a deeply specialized solution, and should provide a much-needed helping hand to augment data analysts’ efforts. At Blue Hill, we have identified Alteryx, Informatica Rev, Paxata, Tamr, and Trifacta as our five key standalone solutions to evaluate. (For a deeper analysis of each solution and a further look at market forces in general, be on the lookout for our upcoming research report on the subject.) These products represent a new breed of solutions that emphasize code-free environments for visually building data blending workflows. Further, the majority of these solutions leverage machine learning, textual analysis, and pattern recognition to handle the brunt of the dirty work automatically.

As a forward-looking indicator to the promise of the space, venture capital firms have notably placed their bets. Most recently, Tamr announced $25.2 million in funding this week, and Alteryx landed $60 million in funding late last year. This is a validation of what data analysts already know: the need for scalable and automated data blending and preparation capabilities is gigantic.


Solving BI When IT Has Too Many Bosses

It’s no secret that there is an often-uneasy tension between IT and line-of-business employees. The tension points range from the business feeling that IT is slowing it down to IT feeling exasperated by the business’s unwillingness to work with the tools it is given.

This gap in understanding has been increasingly illuminated in the realm of data analytics and business intelligence (BI). The explosion in popularity of visual data discovery tools such as Tableau and Qlik, as well as the proliferation of self-service access to enterprise data, has blurred the lines defining what responsibilities fall where. But if we are to address the issue, it is helpful to understand where it is coming from.

Broadly, Blue Hill has observed a decentralization of data analysis. More and more employees in the line of business have the autonomy to slice, dice, and otherwise analyze corporate data without the intervention of a central IT organization. This has many potential advantages, not least of which is lifting a significant request-filling burden off IT’s shoulders and getting the business the answers it needs faster.

But providing employees the flexibility and speed they desire with the security and consistency IT requires is easier said than done. One of the leading points of tension comes from the fact that in today’s world IT has more “bosses” than it ever had in the past … many more.

The CIO’s office has always been accustomed to working for the CFO. Reporting on data associated with the financial performance of the company is at the top of every business’s to-do list. With this in mind, the CIO and the CFO are a natural partnership that came of age during BI’s initial ascent from relevancy to necessity. As such, the two have learned how to peacefully coexist to ensure that the company runs smoothly.

But finance is no longer unique as a data-driven business function. Whether it’s HR, quality control, or sales, data is the lifeblood of decision-making in almost every aspect of operations. This means that IT must help support BI and analytics throughout a much broader swath of functions – with each having its own unique needs for data and analysis.

In the modern data-driven enterprise, IT now has many more people to keep happy as it supports data initiatives, and each side of the equation is still working out how to work together.

There is perhaps no more publicized rift than that between the CIO and the Chief Marketing Officer. At its core, this is because the marketing function has undergone an extraordinary revolution. Data lies squarely at the heart of modern marketing. Success in the world of branding and perception has come to be quantified in terms of  “open rates,” “conversions,” “impressions,” and “shares”.

Data and analysis in marketing can lead to more effective campaigns, cost savings, and of course more sales. As such, CMOs have demanded access to data analytics, regardless of whether the IT organization is prepared to support them. Analytics tools are built and sold directly to CMOs, meaning that budget for and domain expertise in data analysis no longer reside exclusively in the IT department. CMOs must be data-savvy individuals, and many CIOs are not comfortable with the cession of power that this might mean. If we extend this beyond the CMO’s office to the other functions throughout a company, we can see how this feeling might be compounded.

Ultimately, building silos of expertise or data within a company is counterproductive and inefficient. Shared services around access to centralized and trusted sources of data are essential to avoid inconsistent underlying data that could compromise confidence in any findings. As much as analytics is now driven by the individual business use cases, providing shared services to make analysis consistent and trusted across an entire company is firmly in the realm of IT. Security, governance, and consistency can be managed by business on small scales of a few employees. But it is important to remember: if something is effective, it will spread. This means that small implementations will grow, and when they do, things will break down without an IT-managed backbone.

So what steps can you take to unite the perspectives of both IT and the line of business as analytics implementations grow beyond just a few employees? Blue Hill is about to release a new research study, “Closing the IT and Line-of-Business Gap,” for which we conducted in-depth qualitative interviews with a number of companies that have encountered these challenges. In the report, we outline techniques and success stories of companies that have managed to unite these perspectives even when scaling their analytics implementations across thousands of end users.

To make sure that you get this report, please subscribe to my research. Don’t hesitate to contact me if you have any questions.



Fundamental Shifts in Information Management

As market observers, we at Blue Hill have seen some big fundamental changes in the use of technology, such as the emergence of Bring Your Own Device, the progression of cloud from suspect technology to enterprise standard, and the assumption of ubiquitous, non-stop social networking and interaction. All of these trends have changed our assumptions about technology usage, and they have brought market shifts in which traditional players ceded ground to upstarts or new market entrants.

Based on market trends that are occurring simultaneously, Blue Hill believes that the tasks of data preparation, cleansing, augmentation, and governance are facing a similar shakeup in which the choices that enterprises make will fundamentally change. This shift is due to five key trends:

- Formalization of Hadoop as an enterprise technology
- Proliferation of data exchange formats such as JSON and XML
- New users of data management and analytics technology
- Increased need for data quality
- Demand for best-in-breed technologies

First, Hadoop has started to make its way into enterprise data warehouses and production environments in meaningful ways. Although the hype of Big Data has existed for several years, the truth is that Hadoop was mainly limited to the largest of data stores back in 2012, and enterprise environments were spinning up Hadoop instances as proofs of concept. However, as organizations have seen the sheer volume of relevant data requested for business usage increase by an order of magnitude, between customer data, partner data, and third-party sources, Hadoop has emerged as a key technology to simply keep pace with the intense demands of the “data-driven enterprise.” This need for volume means that enterprise data strategies must include both the maintenance of existing relational databases and the growth of semi-structured and unstructured data that must be ingested, processed, and made relevant for the business user.

Second, with the rise of APIs, data formats such as JSON and XML have become key enterprise data structures for exchanging data of all shapes and sizes. As a result, Blue Hill has seen a noted increase in enterprise requests to cleanse and support JSON and other semi-structured data strings within analytic environments. Otherwise, this data remains simply descriptive information rather than analytic data that can provide holistic, enterprise-wide insights and guidance. To support the likes of JSON and XML without simply taking a manual development approach, enterprise data management requires investment in tools that can quickly contextualize, parse, and summarize these data strings into useful data.
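
As a rough illustration of what “contextualizing and parsing” a JSON string involves, the sketch below flattens a small nested payload into analyzable rows using pandas. The payload and field names are invented for the example and do not reflect any particular vendor’s tooling:

import json
import pandas as pd

# A hypothetical semi-structured payload of the sort an API might return
payload = '''
[
  {"order_id": 1001,
   "customer": {"id": "C-17", "region": "Northeast"},
   "items": [{"sku": "A-1", "qty": 2}, {"sku": "B-9", "qty": 1}]},
  {"order_id": 1002,
   "customer": {"id": "C-42", "region": "Midwest"},
   "items": [{"sku": "A-1", "qty": 5}]}
]
'''

records = json.loads(payload)

# Flatten to one row per line item, carrying order- and customer-level context along
flat = pd.json_normalize(records, record_path="items", meta=["order_id", ["customer", "id"], ["customer", "region"]])
print(flat)

Multiply this by dozens of feeds, each with its own nesting and naming conventions, and the case for tools that automate the contextualizing, parsing, and summarizing becomes clear.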

Third, it’s hard to ignore the dramatic success of self-service analysis products such as Tableau and the accompanying shift in users’ relationship with data. Also, it’s important to consider the nearly $10 billion recently spent to take two traditional market leaders, TIBCO and Informatica, private. Users of data management and analytics technology have spread beyond the realm of IT, and are now embedded into the core function of roles within various business groups. Traditional technology vendors must adapt to these shifts in the market by focusing on ease of use, compelling future-facing roadmaps, and customer service. With the two largest independent data management players going private, the world of information management will most likely be opened up in unpredictable ways for upstarts that are natively built to support the next generation of data needs.

Fourth, companies are finally realizing that Big Data does not obviate the need for data quality. Several years ago, there was an odd idea that Big Data could stay dirty because the volume was so large that only the “directional” guidance of the data mattered and that the statistical magic of data scientists would fix everything. As Big Data has increasingly become Enterprise Data (or just plain old data), companies now find that this is not true, and, just as with every other computing asset, garbage in is garbage out. With this realization, companies now have to figure out how to refine Big Data of the past five years from coal to diamonds by providing enterprise-grade accuracy and cleanliness. This requires the ability to cleanse data at scale, and to use data cleansing tools that can be used not just by expert developers and data scientists, but by data analysts and standard developers as well.

Finally, the demand for best-in-breed technologies is only increasing with time. One of the most important results from the “there’s an app for that” approach to enterprise mobility is the end user’s increasing demand to instantly access the right tool at the right moment. For a majority of employees, it is not satisfactory to simply provide an enterprise suite or to take a Swiss Army knife approach to a group of technologies. Instead, employees expect to switch back and forth between technologies seamlessly, and they don’t care whether their favorite technologies are provided by a single vendor or by a dozen vendors. This expectation for seamless interoperability forces legacy vendors either to have a roadmap that makes all of their capabilities best-of-breed, or to lose market share as companies shift to vendors that provide specific best-in-class capabilities and integrate with other top providers. This expectation is simply a straightforward result of the increasingly competitive and results-driven world that we all live in, where employees desire to be more efficient and to have a better user experience.

As these market pressures and business expectations all create greater demand for better data, Blue Hill expects that the data management industry will undergo massive disruption. In particular, data preparation represents an inordinate percentage of the time spent on overall analysis. Analysts spend the bulk of their time cleaning, combining, parsing, and otherwise transforming data sets into a digestible input for downstream analysis and insights. As organizations deal with an increasingly complex data environment, the time spent on getting data ready for analysis is expanding and threatening to overwhelm existing resources. In response to these market forces, a new class of solutions has emerged, focused on the data “wrangling” or “transformation” process. These solutions leverage machine learning, self-service access, and visual interfaces to simplify and expedite analysts’ ability to work with data even at the largest of scales. Overall, there is an opportunity for IT orchestrators to bring this new category of tools into their arsenal of best-in-breed solutions.


Companies that are ready for this oncoming tidal wave of data change will be positioned to support the future of data-driven analysis. Those that ignore this inexorable set of trends will end up drowning in their data, or losing control of it as growth outstrips governance.


What U.S. Agriculture and the Big Data Skills Gap Have in Common

It’s estimated that by 2018, the U.S. will face a shortage of roughly 1.5 million managers who are able to work with Big Data outputs, and that we will need an additional 140,000 – 190,000 workers who are deeply adept at Big Data analytics. Researchers, pundits, and policy makers alike have sounded the alarm bells that we are facing a dire talent shortage. Indeed, many have taken to calling it the “Big Data skills gap.”

Are these numbers meaningful? Yes. Are these numbers cause for panic or concern? Absolutely not.

The “Big Data skills gap” lends itself nicely to news articles and sound bites, but it is short-sighted. Yes, pushing colleges to offer more data analytics courses or training broad swaths of young employees is helpful, but only to a certain point. What people forget is that there are two levers that we can pull to address any skills gap: education and technological innovation.

The demand for data analytics is enormous, but if we cannot get the human expertise to meet this demand, we will find a way to automate it. This is a trend that is in no way unique to data analytics. Consider, for instance, the history of agricultural production. The U.S. Department of Agriculture tells us that in 1920 America had approximately 32 million people living on farms, supporting a population of 105.7 million. In other words, nearly 30% of the population lived on farms. Contrast that with today, when only 2% of America’s 300+ million people live on farms, and less than 1% of the total population actually claims farming as an occupation. The story is clear: we have far fewer farmers producing far more food. The answer to meeting the country’s ballooning demand for food was technology, not manpower.[1]

Expect more of the same in the realm of big data analytics. Solution providers are speeding headlong to a future where machine learning and automation are replacing tedious and/or specialized tasks currently reserved for highly-skilled employees. In short, software vendors’ aim is to place a layer of abstraction between the user experience and the underlying commands required to manipulate and analyze data.

That is to say, users can “point-and-click” and “drag-and-drop” in place of writing scripts and query language. In doing so, modern analytics solutions are smashing traditional technical barriers to adoption. Not only can more users now perform complex analysis, but organizations also need less in-house expertise to accomplish the same objectives.
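
For a sense of what that abstraction layer hides, consider a simple “sum of revenue by region” view. In a modern tool it is a drag-and-drop gesture; behind the scenes, the tool effectively writes something like the following on the analyst’s behalf (a hypothetical pandas sketch with invented file and column names):

import pandas as pd

# The aggregation a drag-and-drop gesture spares the analyst from writing by hand
sales = pd.read_csv("sales.csv")
summary = sales.groupby("region", as_index=False)["revenue"].sum()
summary = summary.sort_values("revenue", ascending=False)
print(summary)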

A number of vendors are pushing the envelope forward here, and it’s coming from all sides of the data analytics equation. On the data management and preparation side, solutions like Trifacta, Paxata, Datawatch, and Informatica Rev are challenging old perceptions of what is required to cleanse huge amounts of data. On the advanced analytics end of the spectrum, RapidMiner, Alpine Data Labs, and Alteryx have code-free interfaces that extend the option of performing complex tasks to non-data scientists. (It should be noted that Alteryx is an effective data blending tool as well.) The core business intelligence space has seen this trend play out to the greatest extent, something I’ve covered at length in previous blog posts.

Hadoop environments are still notoriously opaque to those who are not highly specialized, but this is a product of the comparative infancy of the technology. Rest assured that as machine learning and user experience innovations continue, managing Hadoop will follow the same trend line.


Although not a pure-play data analytics company, Narrative Science gives us a fascinating glimpse into what might be possible in the future. Already, users can turn vast tables and streams of data instantly into coherent prose. Narrative Science’s Quill takes a collection of data and turns it into a summary of the relevant highlights and trends as if a writer were hired to perform the same task (only it does so orders of magnitude faster).
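
Conceptually, this is template- and rule-driven generation of prose from structured inputs. The toy sketch below illustrates the pattern only; Quill’s actual technology is far more sophisticated, and nothing here reflects its real API:

# A toy, hypothetical data-to-prose example; the figures are invented
quarter = {"name": "Q3", "revenue_m": 4.2, "growth": 0.08, "top_region": "Northeast"}

summary = (
    f"{quarter['name']} revenue came in at ${quarter['revenue_m']:.1f}M, "
    f"up {quarter['growth']:.0%} over the prior quarter, "
    f"with the {quarter['top_region']} region leading the gains."
)
print(summary)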

I should be clear that data scientists and data analytics skills are not becoming obsolete. Rather, where they will be applied is shifting higher and higher on the value chain. Routine and repeatable tasks will continue to be automated, while human talent will be applied to innovative and complex operations. If we go back to the US agriculture analogy, we can see how human energy has shifted from performing tasks such as plowing or planting to areas such as operating machinery or, even further up the value chain, to efforts aimed at tactically hedging against weather and market conditions. Just as in farming, when it comes to producing greater amounts of analytical yields, we will be able to continually accomplish more with less.

Yes, the skills gap shows us that demand for analytics skills has outpaced the supply of talented workers, but that is the nature of innovation. Software suppliers and soon-to-be entrepreneurs will rise to the occasion and tackle the hardest problems of human scale limitations so that we don’t have to.

 

[1] A counterpoint could be that U.S. farmers have dwindled because we are now better able to import food from other countries. While this is true, the U.S. currently exports more food than it imports.


Birst Announces Tech Partnership with Tableau? What's Going On?

On April 15, 2015, Birst announced a technology partnership with Tableau in which Birst’s BI platform will connect to Tableau via ODBC, allowing Birst users to connect directly to Tableau and Tableau customers to connect directly to Birst. This may come as a bit of a surprise for customers who have previously evaluated Birst and Tableau head-to-head for business intelligence purchases, and for existing customers of both vendors, who may have considered the two to be similar in nature. So, why does this announcement make sense?
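
For readers unfamiliar with what an ODBC bridge implies in practice, the sketch below shows the general pattern from a script: configure a data source name (DSN) for the source, connect, and issue standard SQL. The DSN, credentials, and table here are hypothetical placeholders, not Birst’s documented configuration; in the partnership, Tableau itself plays the role of the ODBC client.

import pyodbc

# Connect through a configured ODBC data source name (all values are placeholders)
conn = pyodbc.connect("DSN=BirstSource;UID=analyst;PWD=example")
cursor = conn.cursor()

# Issue standard SQL against the exposed data set
cursor.execute("SELECT region, SUM(revenue) AS total_revenue FROM sales GROUP BY region")
for region, total_revenue in cursor.fetchall():
    print(region, total_revenue)

conn.close()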

Although both of these companies have been lumped into the “BI Platform and Solution” bucket by a number of analyst firms, Blue Hill believes that these companies are actually quite different both in their focus and in their core value propositions. Typically, Blue Hill has found that when Birst and Tableau are head-to-head in a deal, the decision ends up being relatively straightforward because these vendors differ significantly in their strengths and weaknesses.

Although both companies are rising stars in their own right, end users must understand how differently these vendors approach supporting business intelligence. This technical partnership is a reflection of the fact that Birst and Tableau, although typically seen as BI competitors, can also be used in a single analytics environment. To understand how this works, consider how each has evolved over the past several years.

Birst and Two-Tier Data Architecture

Although Birst has its own visualization capabilities, as well as a predictive visualization wizard in Birst Visualizer, its greatest strength is actually as a bridge that brings together legacy data sources and data warehouses with emerging datamarts and cloud-based data in a single destination for corporate BI. This bridge occurs through Birst’s user-level data tier, which provides a unique data view for each user within an organization based on the combination of internal and third-party data sources that the user may need.


By allowing each user to access and shape their own data environment based on individualized needs, and then serving as the intermediary to update enterprise data environments, Birst ends up being the key traffic director between individual data explorers and the enterprise truth. Because the social, mobile, and cloud technology paradigm has led to a splintering of data sources and data analysis, companies now must figure out how to put all the pieces together again. It’s a Humpty Dumpty problem where all the king’s horses and all the king’s men must put Humpty Dumpty together again, or risk losing basic visibility into key business processes. This is where Birst has a strong opportunity to support individual analytic choices and tie each individual’s actions to an enterprise environment.

Tableau and Data Discovery

Over the past few years, Tableau has instigated a new arms race in visualization where cloud BI vendors such as Birst and GoodData; standalone players such as Qlik, SAS, Information Builders, and MicroStrategy; and megavendors such as Oracle, IBM, Microsoft, and SAP all had to improve their visualizations and data discovery capabilities. The BI market can thank Tableau for creating a new standard that every other vendor had to match.

Now, companies doing their due diligence in BI typically know who Tableau is. The next step is for Tableau to increase the size and scale of its data discovery environments, which is most obvious through Tableau’s Drive methodology designed to “scale a culture of analytics.” This culture challenge has been one of the greatest hurdles in business intelligence. Although a fair number of enterprises have developed a BI center of excellence, and vendors have become increasingly flexible with their licensing approaches, the real barrier to adoption has been the complexity of building true IT-business partnerships to support analytic environments. This is the predicament that Tableau has sought to tackle with the Drive approach and then to support with its own software and technology partners.

Birst and Tableau: Two Roads Diverged

This partnership reflects Birst and Tableau’s diverging paths in the enterprise BI world. Although both vendors will still find themselves competing against each other in specific deals, Blue Hill believes that this partnership is a good example of how each company is pursuing its strengths.

The reality is that datamarts and basic reporting have been around for decades. If the traditional methods of supporting these needs were good enough, in and of themselves, neither Birst nor Tableau would have ever taken off. But to reach the next level of integration with legacy enterprise environments, Birst and Tableau now can work together, at least on a technical level.

For business leaders, this partnership should help paint a clearer picture of their organization’s data analysis roadmap. The relative strengths of each solution illuminate the need for a combined approach. Ultimately, in order to ensure meaningful top-level data exploration, you must first ensure that the underlying data is trustworthy and complete. To build a full stack of data analysis capabilities, the choice was never really one or the other; it was combining the complementary aspects of each. Now, with this partnership, the choice has become a little easier.


The Business Analyst’s Next Frontier: Advanced Analytics

We have noted in prior blogs the increasing ubiquity of features and functionality that were once differentiators in the BI and analytics space. That is to say, cloud, mobile, and self-service access to reports and dashboards have moved from the realm of unique competitive advantages to table stakes.

While there are certainly battles yet to be won on pricing, user interface, extensibility, data management, customer service, and a host of other factors, one of the most intense areas of competition is advanced analytics. The term “advanced analytics” itself can encompass a variety of undertakings, from geospatial analysis and graph data analysis to predictive modeling.

The upside of adopting such capabilities is that organizations have the opportunity to get a lot more out of their data than they would from just reporting on it via dashboards. For instance, undertakings that improve forecasting, reduce customer churn, reduce fraud, or optimize production capacity can all have a meaningful impact on a firm’s bottom line.

In this case, the Holy Grail is figuring out how to give regular business analysts the ability to perform analysis that was traditionally reserved for specialized teams of data scientists. Just as desktop data-discovery tools and self-service BI helped to democratize access to data throughout the company, vendors are hoping to extend advanced analytics to a broader population of business users. The result has been an arms race as vendors have gone on a frenzy to build, buy, or partner their way into the conversation.

The constraint for many organizations has been the price tag. Even if the explicit cost of an advanced analytics solution is low through integration with something like the open source R language, the investment of time and personnel resources is often still significant. Simply put, advanced analytics expertise is expensive. Putting advanced analytics capabilities in the hands of business analysts delivers value in two directions. Companies without prior investments in advanced analytics can now perform basic forecasting and modeling that they otherwise could not. For companies that already have investments and teams of experts, it means that lower-level and less-complex requests can be pushed down to business analysts, while data scientists are freed up to take on more complex challenges.
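
To ground what “basic forecasting” means at the business-analyst end of the spectrum, the sketch below fits a simple linear trend to a short revenue series and projects it forward. The figures are invented, and the tools discussed here wrap far more robust methods (seasonality, confidence intervals, and so on) behind their interfaces:

import numpy as np

# Eight months of invented revenue figures, in thousands of dollars
revenue = np.array([120.0, 135.0, 128.0, 150.0, 162.0, 158.0, 171.0, 184.0])
months = np.arange(len(revenue))

# Fit a least-squares linear trend of revenue as a function of month
slope, intercept = np.polyfit(months, revenue, deg=1)

# Project the next quarter
future_months = np.arange(len(revenue), len(revenue) + 3)
forecast = slope * future_months + intercept
print(np.round(forecast, 1))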

Business decision-makers evaluating their next BI and analytics investment should consider to what extent advanced analytics capabilities are built in. Megavendors such as IBM, SAP, and Microsoft have responded by releasing freemium offerings that give business analysts an accessible means to try their hand at these capabilities.

To this end, IBM Watson Analytics has taken an impressive leap in integrating forecasting into a visual interface that masks the complexity of the underlying SPSS functionality required to perform the analysis. From a user experience perspective, Microsoft’s release of Power BI takes a similar approach in that it integrates natural language processing so that users can ask questions of their data in a code-free environment. The cloud-based model and low cost further extend its accessibility to business analysts.

In a similar vein, SAP Lumira is working toward the convergence of data discovery and advanced analytics through continued integration of its Predictive Analysis suite. After acquiring KXEN in 2013, SAP has prioritized its assimilation into the overall analytics portfolio, the end goal (and important differentiator) being that a business analyst using the Lumira interface will have access to advanced analytical functionality with the enterprise-grade data governance of SAP and the performance of the HANA platform.


Coming from a different angle, a few emerging advanced analytics players, such as Alpine Data Labs, Alteryx, and RapidMiner, are quickly introducing code-free environments with powerful capabilities. The blending, modeling, and automation capabilities of these companies hold up even at massive scale, making them important tools for augmenting the capabilities of data scientists and everyday business analysts alike.

It is important to note that what we are talking about, in many of these cases, is an extension of the business analyst’s capabilities. We are broadening what can be accomplished if we take BI to its next logical extension. I’m not suggesting that these functionalities can displace truly advanced analysis across complex time series, data mining, multivariate environments, or Big Data sets from sources like machine sensors. Rather, businesses are demanding greater functionality on top of core BI and analytics capabilities, and the vendors that deliver on this will stand to gain.

There will be inevitable pushback as more advanced functionality gets placed in the hands of users that are not formally trained data scientists. Making business decisions on shaky analysis is a dangerous proposition, and no doubt many will wonder if giving functionality to those who don’t know how to use it might be akin to “giving them enough rope to hang themselves.” This is largely a paternalistic viewpoint that will taper off in the same way that similar fears about desktop data-discovery did.  Vendors will continue to build out guided user experiences to assuage these fears, and organizations will put into place safeguards to ensure the quality of underlying analysis. Ultimately, signs point to a future where barriers to advanced data analysis will continue to be lowered, and the capabilities of the data analyst will continue to expand.


Microsoft’s Power BI Will Transform Enterprise BI in 2015

Microsoft announced on January 27th that it is planning to make its self-service BI solution, Microsoft Power BI, available for a free preview to any United States-based user with a business email account. Microsoft also provided a preview of Power BI for iPad and is planning to create iPhone, Android, and Windows apps for further mobile BI support.

In addition, Microsoft plans to make the newest version of Power BI available as a free service when the general availability launch occurs. There will also be a Power BI Pro offering, so that Power BI will, in effect, become a freemium service similar to Microsoft’s existing Skype and Yammer services. The Power BI Pro offering will increase data capacity from 1 GB to 10 GB per user, accelerate streaming data from 10,000 rows per hour to 1 million rows per hour, refresh data on an hourly basis, and provide embedded data management and collaboration capabilities.

As part of this change, Microsoft also plans to reduce the price point for Power BI to $9.99 per month, which represents a significant reduction from current price points for cloud BI.

There are several key ramifications for enterprise BI that Blue Hill’s community must understand immediately:

1) Microsoft is taking a freemium approach to BI with the goal of owning the end-user market. This strategy is based on Microsoft’s view of the new, consumerized model of software purchase and acquisition, and it demonstrates Microsoft’s willingness to commoditize its own price points and products in the long-term battle to win cloud and end-user mindshare. Microsoft has learned to execute on this freemium model from several key consumer and freemium investments over the past decade: XBox, Skype, and Yammer.

In pursuing XBox revenue, Microsoft has had to learn the consumer gaming and media market and has gained a deep understanding of demos and consumer advertising that it previously lacked. In addition, Microsoft’s acquisition of Skype has led to Microsoft’s management of a free service that has fundamentally transformed communications and actually led Microsoft to even change its enterprise communications service from “Lync” to “Skype for Business.” And, finally, Microsoft’s acquisition of Yammer was initially seen as confusing, given how Yammer directly competed against collaboration stalwart SharePoint. However, as Microsoft has continued to execute on the development of Skype and Yammer and started integration between those services and Microsoft’s traditional Lync and SharePoint services, it has become obvious that Microsoft is willing to compete against itself and to take on the challenging transformational tasks needed to compete in a cloud and mobile world.

In this regard, Microsoft is actually in a less challenging situation with Power BI in that Microsoft never fully invested in creating or buying a Business Objects, Cognos, or Hyperion BI application suite. This means that Microsoft is able to position itself for a cloud BI world without having to directly compete against its own products. At the same time, expect Microsoft to bring all of its best practices from XBox, Skype, and Yammer to support a freemium model and agile development that have led to the success of these other more consumerized products.

2) Microsoft is also planning to commoditize the mobile BI market. With the impending launches of Power BI for iPhone, Android, and Windows, it is difficult to imagine mobile BI as a premium product going forward, at least in terms of pricing. Mobile BI is already basically table stakes from an RFP perspective, but high-quality mobile BI will now be necessary even for free and freemium BI offerings. In 2010, mobile BI and high-quality visualizations were key differentiators. In 2015, these are just basic BI capabilities. Companies seeking differentiation in BI will increasingly look at professional services, vertical expertise, and the ability to eliminate both implementation time and IT support to reduce the basic total cost of ownership for BI.

3) Comparing cloud BI pricing apples to apples is becoming more difficult. Although Power BI’s current and intended pricing models are fairly straightforward, one of the challenges in cloud BI is that every vendor provides a different set of resources and capabilities to support its on-demand and subscription services. As a quick example, consider how Power BI will compare against Jaspersoft, which provides BI services on an hourly basis on Amazon Web Services.

Power BI will provide its Pro service at $9.99 per month, or basically $120 per year. A variety of cloud BI services such as Birst, GoodData and InsightSquared could come in at about $1,000 per year per user for a standard out-of-the-box implementation. In contrast, Jaspersoft supports an annual instance on AWS at 55 cents per hour on an m3.medium EC2 instance, which equates to about 4 GB. This adds up to about $3,750 per year. So, is this a simple comparison?


Consider that Power BI provides a standard BI environment, but will not be customized out-of-the-box to immediately support standard data sources such as Salesforce.com. Birst and GoodData will provide various levels of integration, data structures, and vertical expertise to their implementations while a sales analytics specialist such as InsightSquared could potentially implement a new solution with Salesforce data in a matter of minutes. And Jaspersoft’s offering will be better suited for an embedded BI solution because it provides no user limits. So, even with this impending price war that Microsoft will drive, companies will still have to carefully define their BI use cases and select potential solutions carefully. However, Blue Hill expects that standard BI capabilities on a user-specific basis will become as easy to access as Skype, Yammer, an XBox game, or Facebook (another Microsoft investment).

In 2015, Microsoft will shift the fundamental BI purchase decision from “How much does it cost to get BI?” to “How much will it cost to get BI expertise that is aligned to our organization?” The answer to the former question could well become “Microsoft.” The answer to the latter question is where other vendors will need to compete.


The Pumpkin Spice School of Big Data

In our particular pocket of New England, the leaves are turning golden, and football is replacing baseball on the TVs. This means one thing to coffee drinkers: the re-emergence of the Pumpkin Spice Latte at Starbucks. Over the past ten years, this drink has gone from an odd cult drink to a phenomenon so large that it has earned its own hashtag on Twitter: #PSL.

At the same time, one has to wonder, “What is Pumpkin Spice?” (Other than possibly the long-lost American cousin of the Spice Girls?) Pumpkin spice doesn’t actually have pumpkin in it. And it’s far from the spiciest flavor out there. However, the concept of “pumpkin spice” evokes something handmade, traditional, and uniquely American in a way that draws people into wanting to consume it. Despite its complete lack of pumpkin and relative lack of spice, the flavor created is almost secondary to the cultish conceit that has been constructed around “Pumpkin Spice.”

Unfortunately, the hype, conceptualization, and ubiquity of Pumpkin Spice are matched in the enterprise world by the most overhyped phrase in tech: Big Data. Like Pumpkin Spice, everybody wants Big Data, everybody wants to invest in Big Data tools, and everybody thinks that we are currently in a season or era of Big Data. And in the past, we’ve explained why we reluctantly think the term “Big Data” is still necessary. But when you go behind the curtain and try to figure out what Big Data is, what do you actually find?

For one thing, “Big Data” often isn’t that big. Although we talk about petabytes of data, there are practitioners who talk about “Big Data” problems that are only hundreds of megabytes. These are still sizable data sets, but such problems are manageable with traditional analytics tools.

And even when Big Data is “big,” this is still a very relative term. For instance, even when Big Data collects terabytes of data, text, and binaries, the data collected is rarely analyzed on a daily basis. In fact, we still lack the sentiment analysis, video analysis, and audio analysis needed to quickly analyze large amounts of data. And we know that data is about to grow by at least one order of magnitude, if not two, as the Internet of Things and the accompanying billions of sensors start to embed themselves into our planet.

Even outside of the Internet of Things, the entirety of the biological ecosystem represents yet another large source of data that we are just starting to tap. We are nowhere close to understanding what happens in each of our organs, much less in each cell of our bodies. To get to this level of detail for any lifeform represents additional orders of magnitude for data.

And then there’s even a higher level of truly Big Data when we track matter, molecules, and atomic behavior on a broad-based level to truly understand the nature of chemical reactions and mechanical physics. Compared to all of this, we are just starting to collect data on Planet Earth. And yet we call it Big Data.


So, our “Big Data” isn’t big in comparison to the amount of data that actually exists on Earth. And the types of data that we collect are still very limited in nature, since they almost always come from electronic sources, and often lack the level of detail that could legitimately recreate the environment and context of the transaction in question. And yet we are already calling it Big Data and setting ourselves up to start talking about “Bigger Data,” “Enormous Data,” and “Insanely Large Data.”

To get past the hype, we should start thinking about Big Data in terms of the scope that is actually being collected and supported. There is nothing wrong with talking about the scale of “log management data” or “sensor data” or “video data” or “DNA genome data.” For those of us who live in each of these worlds and know that log management gets measured in terabytes per day or that the human genome has 3 billion base pairs and approximately 3 million SNP (single-nucleotide polymorphism) replacements, we start talking about meaningful measurements of data again, rather than simply defaulting to the overused Big Data term.

I will say that there is one big difference between Pumpkin Spice season and Big Data Season. Around the end of the year, I can count on the end of Pumpkin Spice season. However, the imprecise cult of Big Data seems far from over; the community of tech thought leaders continues to push more and more use cases into Big Data, rather than provide clarity on what actually is “Big,” what actually constitutes “Data,” and how to actually use these tools correctly in the Era of Big Data.

In this light, Blue Hill Research promises to keep the usage of the phrase “Big Data” to a minimum. We believe there are more valuable ways to talk about data, such as:

- Our primary research in log and machine data management
- Our scheduled research in self-service topics including data quality, business intelligence, predictive analytics, and enterprise performance management
- Tracking the $3 billion spent in analytics over the past five years.
- Cognitive and neuroinspired computing

By focusing on the actual data topics that provide financial, operational, and line-of-business value, Blue Hill will do its best to minimize the extension of Big Data season.
