Topics of Interest Archives: Big Data

New Revenue Opportunities at the Intersection of IoT and Analytics

Here at Blue Hill, more and more of our time is spent exploring implications associated with the Internet of Things. Naturally, there has been a great deal of buzz around IoT analytics.

We’ve had the chance to speak with a number of companies that are doing some truly fantastic things with IoT. This includes everything from farms watering their crops more efficiently to cities optimizing traffic patterns – all of which speak to the potential value that analytics can bring on top of a world of connected ‘things’.

In the IoT gold rush, business leaders should not just ask whether they can bolster existing revenue streams to build top-line growth; they should also ask how they can drive top-line performance from net-new revenue streams.

One organization I spoke with, an industrial truck manufacturer, shared insights that the rest of us should learn from. As part of their IoT initiatives, they first began collecting data on their trucks to identify common problem areas. This eventually evolved into the ability to provide predictive maintenance on their trucks. As part of their warranty support, they now identify imminent problems and schedule service appointments before a breakdown ever happens. This represents a series of meaningful incremental improvements that not only save time and money, but also improve the quality of their trucks over time.

However, the manufacturer went a step further, realizing that this service would be highly valuable to their customers as well. They now offer the ability to preemptively identify issues and schedule service appointments as a subscription service to their customers. In short, truck buyers can purchase this value-added service on an ongoing annual basis. The result is a better-serviced customer and an entirely new (and recurring) revenue stream.

Any producer of highly valuable capital equipment has a direct lesson to learn from this quick example. The right blend of analytics and IoT enables a new business model – “predictive maintenance as a service.” This is not just an internal business model to improve operational efficiency and maintenance, but an external and client-facing business model that can translate Big Data into Big Money.

More broadly, the lesson is around the usefulness of the vast amount of data that organizations can collect cheaply thanks to the plummeting costs of sensors and data processing power. If the data is valuable in one context, there is at least a chance that it is valuable in another as well.

Business leaders must continue to recognize this transformational shift as every company, whether in manufacturing, retail, or construction, becomes data-centric. As our ability to collect and process data about every aspect of our business increases exponentially, we should always be looking for opportunities to repackage that data to drive new and innovative revenue streams.

What new business models do you see from the intersection of IoT and analytics? Join the conversation on Twitter (@James_Haight) or feel free to email me directly (jhaight@bluehillresearch.com) with your thoughts.


Why Your Data Preparation and Blending Efforts Need a Helping Hand

In past blog posts, we talked about how data management is fundamentally changing. It’s no secret that a convergence of factors – an explosion in data sources, innovation in analytics techniques, and the decentralization of analytics away from IT – creates obstacles as businesses try to invest in the best way to get value from their data.

Individual business analysts face a growing challenge as the difficulty of preparing data for analysis expands almost as quickly as the data itself. Data exchange formats such as JSON and XML are becoming more popular, and they are difficult to parse and make useful. Combined with the vast amounts of unstructured data held in Big Data environments such as Hadoop and the growing number of ‘non-traditional’ data sources like social streams or machine sensors, getting data into a clean format can be a monumental task.
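To make the parsing challenge concrete, here is a minimal sketch (in Python with pandas, using a made-up order payload and hypothetical field names) of how a nested JSON feed can be flattened into plain rows and columns before analysis:

```python
import pandas as pd

# A nested payload of the kind an API or connected device might return
# (hypothetical structure, for illustration only).
orders = [
    {
        "order_id": 1001,
        "customer": {"id": "C-17", "region": "Northeast"},
        "lines": [
            {"sku": "A-1", "qty": 2, "price": 19.99},
            {"sku": "B-9", "qty": 1, "price": 5.49},
        ],
    },
    {
        "order_id": 1002,
        "customer": {"id": "C-42", "region": "Midwest"},
        "lines": [{"sku": "A-1", "qty": 4, "price": 19.99}],
    },
]

# Flatten the nested records into one row per order line so the feed can be
# joined and aggregated like any other table.
flat = pd.json_normalize(
    orders,
    record_path="lines",
    meta=["order_id", ["customer", "id"], ["customer", "region"]],
)
print(flat)
```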

Analyzing social media data and its impact on sales sounds great in theory, but logistically, it’s complicated. Combining data feeds from disparate sources is easier now than ever, but it doesn’t ensure that the data is ready for analysis. For instance, if time periods are measured differently in the two data sources, one set of data must be transformed so that an apples-to-apples comparison can be made. Other predicaments arise if the data set is incomplete. For example, sales data might be missing the zip code associated with a sale in 20% of the data set. This, too, takes time to clean and prepare.
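As a simple illustration of the time-period problem, the sketch below (Python/pandas, with invented numbers) rolls a daily social-mentions feed up to months so it can be joined against monthly sales on an apples-to-apples basis:

```python
import pandas as pd

# Hypothetical inputs: a daily social-media mentions feed and monthly sales.
mentions = pd.DataFrame({
    "date": pd.date_range("2015-01-01", periods=90, freq="D"),
    "mentions": 100,  # placeholder counts
})
sales = pd.DataFrame({
    "month": pd.to_datetime(["2015-01-01", "2015-02-01", "2015-03-01"]),
    "revenue": [120000, 95000, 143000],
})

# Roll the daily feed up to month-start periods so both sources share the
# same grain before joining.
monthly_mentions = (
    mentions.set_index("date")
            .resample("MS")["mentions"]
            .sum()
            .reset_index()
            .rename(columns={"date": "month"})
)

# Now the two sources can be compared directly.
combined = sales.merge(monthly_mentions, on="month", how="left")
print(combined)
```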

This is a constant challenge, and one that is exacerbated at scale. Cleaning inconsistencies in a 500-row spreadsheet is one thing, but doing so across millions of rows of transaction logs is quite another.

A certain level of automation is required to augment the capabilities of the analyst when we are dealing with data at this scale. There is a need for software that can identify the breakpoints, easily parse complex inputs, and pick out missing or partial data (such as zip codes) and automatically fill it in with the right information. Ultimately, the market is screaming for solutions that let analysts spend less time preparing data and more time actually analyzing it.

For all of these reasons, it is no surprise that a number of vendors have come to market offering a better way to prepare data for analysis. Established players like MicroStrategy and Qlik are introducing data preparation capabilities into their products to ease the pain and allow users to stay in one interface rather than toggle between tools. Others, like IBM Watson Analytics and Microsoft Power BI, are following a similar path.

In addition, a number of standalone products are ramping up their market presence. Each offers a deeply specialized solution, and should provide a much-needed helping hand to augment data analysts’ efforts. At Blue Hill, we have identified Alteryx, Informatica Rev, Paxata, Tamr, and Trifacta as our five key standalone solutions to evaluate. (For a deeper analysis of each solution and a further look at market forces in general, be on the lookout for our upcoming research report on the subject.) These products represent a new breed of solutions that emphasize code-free environments for visually building data blending workflows. Further, the majority of these solutions leverage machine learning, textual analysis, and pattern recognition to automatically handle the brunt of the dirty work.

As a forward-looking indicator of the promise of the space, venture capital firms have notably placed their bets. Just this week, Tamr announced $25.2 million in funding, and Alteryx landed $60 million late last year. This is a validation of what data analysts already know: the need for scalable and automated data blending and preparation capabilities is gigantic.


Fundamental Shifts in Information Management

As market observers, we at Blue Hill have seen some big fundamental changes in the use of technology, such as the emergence of Bring Your Own Device, the progression of cloud from suspect technology to enterprise standard, and the assumption of ubiquitous and non-stop social networking and interaction. All of these trends have led to fundamental changes in our assumptions of technology usage, and brought market shifts where traditional players ceded ground to upstarts or new market entrants.

Based on key market trends that are occurring simultaneously, Blue Hill believes that the tasks of data preparation, cleansing, augmentation, and governance are facing a similar shakeup in which the choices that enterprises make will fundamentally change. This shift is due to five key trends:

- Formalization of Hadoop as an enterprise technology
- Proliferation of data exchange formats such as JSON and XML
- New users of data management and analytics technology
- Increased need for data quality
- Demand for best-in-breed technologies

First, Hadoop has started to make its way into enterprise data warehouses and production environments in meaningful ways. Although the hype of Big Data has existed for several years, the truth is that Hadoop was mainly limited to the largest of data stores back in 2012, and enterprise environments were spinning up Hadoop instances as proofs of concept. However, as organizations have seen the sheer volume of relevant data requested for business usage increase by an order of magnitude across customer data, partner data, and third-party sources, Hadoop has emerged as a key technology to simply keep pace with the intense demands of the “data-driven enterprise.” This need for volume means that enterprise data strategies must include both the maintenance of existing relational databases and the growth of semi-structured and unstructured data that must be ingested, processed, and made relevant for the business user.

Second, with the rise of APIs, data formats such as JSON and XML have become key enterprise data structures for exchanging data of all shapes and sizes. As a result, Blue Hill has seen a noted increase in enterprise requests to cleanse and support JSON and other semi-structured data strings within analytic environments. Otherwise, this data remains simply descriptive information rather than analytic data that can provide holistic, enterprise-wide insights and guidance. To support the likes of JSON and XML without simply taking a manual development approach, enterprise data management requires investment in tools that can quickly contextualize, parse, and summarize these data strings into useful data.
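As a rough sketch of what that tooling does under the hood, the example below uses Apache Spark (a common companion to Hadoop) to parse a column of raw JSON strings into typed, queryable columns; the payload structure and field names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("json-cleansing").getOrCreate()

# Hypothetical: a Hadoop-resident table where each row carries a raw JSON
# string from an upstream API or sensor feed.
raw = spark.createDataFrame(
    [('{"device": "pump-7", "temp_c": 81.5}',),
     ('{"device": "pump-9", "temp_c": 76.2}',)],
    ["payload"],
)

# Declare the expected shape once, then let Spark parse it in parallel.
schema = StructType([
    StructField("device", StringType()),
    StructField("temp_c", DoubleType()),
])

parsed = (raw.withColumn("data", from_json(col("payload"), schema))
             .select("data.device", "data.temp_c"))
parsed.show()
```

Declaring the schema up front is what lets this scale: the parsing runs in parallel across the cluster rather than one record at a time in a script.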

Third, it’s hard to ignore the dramatic success of self-service analysis products such as Tableau and the accompanying shift in users’ relationship with data. It’s also important to consider the nearly $10 billion recently spent to take two traditional market leaders, TIBCO and Informatica, private. Users of data management and analytics technology have spread beyond the realm of IT, and are now embedded into the core function of roles within various business groups. Traditional technology vendors must adapt to these shifts in the market by focusing on ease of use, compelling future-facing roadmaps, and customer service. With the two largest independent data management players going private, the world of information management will most likely be opened up in unpredictable ways for upstarts that are natively built to support the next generation of data needs.

Fourth, companies are finally realizing that Big Data does not obviate the need for data quality. Several years ago, there was an odd idea that Big Data could stay dirty because the volume was so large that only the “directional” guidance of the data mattered and that the statistical magic of data scientists would fix everything. As Big Data has increasingly become Enterprise Data (or just plain old data), companies now find that this is not true, and, just as with every other computing asset, garbage in is garbage out. With this realization, companies now have to figure out how to refine Big Data of the past five years from coal to diamonds by providing enterprise-grade accuracy and cleanliness. This requires the ability to cleanse data at scale, and to use data cleansing tools that can be used not just by expert developers and data scientists, but by data analysts and standard developers as well.
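A toy example of the “garbage in, garbage out” problem and of the standardize-then-deduplicate step that data quality tools automate at far larger scale (Python/pandas, with fabricated customer records):

```python
import pandas as pd

# Hypothetical customer records with typical quality problems:
# inconsistent casing, stray whitespace, punctuation, and duplicates.
customers = pd.DataFrame({
    "name": ["Acme Corp ", "ACME CORP", "Globex, Inc.", "globex inc"],
    "state": ["MA", "ma", "NY", "ny"],
})

# Standardize before comparing; otherwise "ACME CORP" and "Acme Corp "
# look like two different customers downstream.
cleaned = customers.assign(
    name=customers["name"].str.strip().str.lower()
                          .str.replace(r"[.,]", "", regex=True),
    state=customers["state"].str.upper(),
).drop_duplicates()

print(cleaned)
```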

Finally, the demand for best-in-breed technologies is only increasing with time. One of the most important results of the “there’s an app for that” approach to enterprise mobility is the end user’s increasing demand to instantly access the right tool at the right moment. For a majority of employees, it is not satisfactory to simply provide an enterprise suite or to take a Swiss Army knife approach to a group of technologies. Instead, employees expect to switch back and forth between technologies seamlessly, and they don’t care whether their favorite technologies are provided by a single vendor or by a dozen vendors. This expectation for seamless interoperability forces legacy vendors either to build a roadmap that makes all of their capabilities best-in-breed, or to lose market share as companies shift to vendors that provide specific best-in-class capabilities and integrate with other top providers. This expectation is simply a straightforward result of the increasingly competitive and results-driven world that we all live in, where employees want to be more efficient and to have a better user experience.

As these market pressures and business expectations all create greater demand for better data, Blue Hill expects that the data management industry will undergo massive disruption. In particular, data preparation represents an inordinate percentage of the time spent on overall analysis. Analysts spend the bulk of their workload cleaning, combining, parsing, and otherwise transforming data sets into a digestible input for downstream analysis and insights. As organizations deal with an increasingly complex data environment, the time spent on getting data ready for analysis is expanding and threatening to overwhelm existing resources. In response to these market forces, a new class of solutions has emerged focused on the data “wrangling” or “transformation” process. These solutions leverage machine learning, self-service access, and visual interfaces to simplify and expedite analysts’ ability to work with data even at the largest of scales. Overall, there is an opportunity for IT orchestrators to bring this new category of tools into their arsenal of best-in-breed solutions.


Companies that are ready to support this oncoming tidal wave of change will be positioned to support the future of data-driven analysis. Those that ignore this inexorable set of trends will end up drowning in their data, or losing control of it as growth outstrips governance.


What U.S. Agriculture and the Big Data Skills Gap Have in Common

It’s estimated that by 2018, the U.S. will face a shortage of roughly 1.5 million managers who are able to work with Big Data outputs, and that we will need an additional 140,000 to 190,000 workers who are deeply adept at Big Data analytics. Researchers, pundits, and policy makers alike have sounded the alarm bells that we are facing a dire talent shortage. Indeed, many have taken to calling it the “Big Data skills gap.”

Are these numbers meaningful? Yes. Are these numbers cause for panic or concern? Absolutely not.

The “Big Data skills gap” lends itself nicely to news articles and sound bites, but it is short-sighted. Yes, pushing colleges to offer more data analytics courses or training broad swaths of young employees is helpful, but only to a certain point. What people forget is that there are two levers that we can pull to address any skills gap: education and technological innovation.

The demand for data analytics is enormous, but if we cannot get the human expertise to meet this demand, we will find a way to automate it. This trend is in no way unique to data analytics. Consider, for instance, the history of agricultural production. The U.S. Department of Agriculture tells us that in 1920 America had approximately 32 million people living on farms supporting a population of 105.7 million. In other words, nearly 30% of the population lived on farms. Contrast that with today, where only 2% of America’s 300+ million people live on farms, and less than 1% of the total population actually claims farming as an occupation. The story is clear: we have far fewer farmers producing far more food. The answer to meeting the country’s ballooning demand for food was technology, not manpower.[1]

Expect more of the same in the realm of big data analytics. Solution providers are speeding headlong toward a future where machine learning and automation replace tedious and/or specialized tasks currently reserved for highly skilled employees. In short, software vendors’ aim is to place a layer of abstraction between the user experience and the underlying commands required to manipulate and analyze data.

That is to say, users can “point-and-click” and “drag-and-drop” in place of writing scripts and query language. In doing so, modern analytics solutions are smashing traditional technical barriers to adoption. Not only can more users now perform complex analysis, but organizations need less in-house expertise to accomplish the same objectives.

A number of vendors are pushing the envelope forward here, and it’s coming from all sides of the data analytics equation. On the data management and preparation side, solutions like Trifacta, Paxata, Datawatch, and Informatica Rev are challenging old perceptions of what is required to cleanse huge amounts of data. On the advanced analytics end of the spectrum, RapidMiner, Alpine Data Labs, and Alteryx have code-free interfaces that extend the option of performing complex tasks to non-data scientists. (It should be noted that Alteryx is an effective data blending tool as well.) The core business intelligence space has seen this trend play out to the greatest extent, something I’ve covered at length in previous blog posts.

Hadoop environments are still notoriously opaque to those who are not highly specialized, but this is a product of the comparative infancy of the technology. Rest assured that as machine learning and user experience innovations continue, managing Hadoop will follow the same trend line.


Although not a pure-play data analytics company, Narrative Science gives us a fascinating glimpse into what might be possible in the future. Users can already turn vast tables and streams of data instantly into coherent prose. Narrative Science’s Quill takes a collection of data and turns it into a summary of the relevant highlights and trends, as if a writer had been hired to perform the same task (only it does so exponentially faster).
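To give a flavor of the underlying idea – and to be clear, this is a toy template, not Quill or any Narrative Science API – the sketch below turns a small table of made-up quarterly figures into a readable summary:

```python
# A toy illustration of turning a small table of figures into prose.
# The numbers and phrasing are invented for illustration only.
quarterly_sales = {"Q1": 1.2, "Q2": 1.5, "Q3": 1.1}  # revenue in $M

def narrate(sales):
    best = max(sales, key=sales.get)
    worst = min(sales, key=sales.get)
    total = sum(sales.values())
    return (
        f"Revenue for the period totaled ${total:.1f}M. "
        f"{best} was the strongest quarter at ${sales[best]:.1f}M, "
        f"while {worst} trailed at ${sales[worst]:.1f}M."
    )

print(narrate(quarterly_sales))
```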

I should be clear that data scientists and data analytics skills are not becoming obsolete. Rather, where they will be applied is shifting higher and higher on the value chain. Routine and repeatable tasks will continue to be automated, while human talent will be applied to innovative and complex operations. If we go back to the US agriculture analogy, we can see how human energy has shifted from performing tasks such as plowing or planting to areas such as operating machinery or, even further up the value chain, to efforts aimed at tactically hedging against weather and market conditions. Just as in farming, when it comes to producing greater amounts of analytical yields, we will be able to continually accomplish more with less.

Yes, the skills gap shows us that demand for analytics skills has outpaced the supply of talented workers, but that is the nature of innovation. Software suppliers and soon-to-be entrepreneurs will rise to the occasion and tackle the hardest problems of human scale limitations so that we don’t have to.

 

[1] A counterpoint could be that U.S. farmers have dwindled because we are now better able to import food from other countries. While this is true, the U.S. currently exports more food than it imports.


The Business Analyst’s Next Frontier: Advanced Analytics

We have posited in prior blogs that features and functionality which were once differentiators in the BI and analytics space are becoming ubiquitous. That is to say, cloud, mobile, and self-service access to reports and dashboards have moved from the realm of unique competitive advantages to table stakes.

While there are certainly battles yet to be won on pricing, user interface, extensibility, data management, customer service, and a host of other factors, one of the areas of most intense competition is advanced analytics. The term “advanced analytics” itself can encompass a variety of undertakings, from geospatial analysis and graph data analysis to predictive modeling.

The upside of adopting such capabilities is that organizations have the opportunity to get much more out of their data than they would from simply reporting on it via dashboards. For instance, projects that improve forecasting, reduce customer churn, reduce fraud, or optimize production capacity can all have a meaningful impact on a firm’s bottom line.

In this case, the Holy Grail is figuring out how to give regular business analysts the ability to perform analysis that was traditionally reserved for specialized teams of data scientists. Just as desktop data-discovery tools and self-service BI helped to democratize access to data throughout the company, vendors are hoping to extend advanced analytics to a broader population of business users. The result has been an arms race, with vendors rushing to build, buy, or partner their way into the conversation.

The constraint for many organizations has been the price tag. Even if the explicit cost of an advanced analytics solution is low through integration with something like the open source R language, the investment of time and personnel is often still significant. Simply put, advanced analytics expertise is expensive. Putting advanced analytics capabilities in the hands of business analysts delivers value in two directions. Companies without prior investments in advanced analytics can now perform basic forecasting and modeling that they otherwise could not. For companies that already have teams of experts, lower-level and less complex requests can be pushed down to business analysts, while data scientists are freed up to take on more complex challenges.
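For a sense of what “basic modeling pushed down to the analyst” looks like in practice, here is a minimal churn-scoring sketch in Python with scikit-learn; the features and the synthetic data are assumptions for illustration, not a production model.

```python
# Synthetic churn example: the feature names and data are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 60, n),        # months as a customer
    rng.integers(0, 10, n),        # support tickets filed
    rng.uniform(10, 200, n),       # average monthly spend
])
# Make churn loosely depend on short tenure plus heavy ticket volume.
y = ((X[:, 1] > 6) & (X[:, 0] < 12)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point is not the particular algorithm but that the entire workflow fits in a couple dozen lines – which is exactly the kind of task that code-free tools now wrap in a visual interface.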

Business decision-makers evaluating their next BI and analytics investment should consider to what extent more advanced analytics capabilities are built in. Mega Vendors such as IBM, SAP, and Microsoft have responded by releasing freemium offerings that allow business analysts an accessible means to try their hand at these capabilities.

To this end, IBM Watson Analytics has taken an impressive leap in integrating forecasting into a visual interface that masks the complexity of the underlying SPSS functionality required to perform the analysis. From a user experience perspective, Microsoft’s release of Power BI takes a similar approach in that it integrates natural language processing so that users can ask questions of their data in a code-free environment. The cloud-based model and low cost further extend its accessibility to business analysts.

In a similar vein, SAP Lumira is working toward the convergence of data discovery and advanced analytics through continued integration of SAP’s Predictive Analysis suite. After acquiring KXEN in 2013, SAP has prioritized its assimilation into the overall analytics portfolio, the end goal (and an important differentiator) being that a business analyst using the Lumira interface will have access to advanced analytical functionality backed by the enterprise-grade data governance of SAP and the performance of the HANA platform.


Coming from a different angle, a few emerging advanced analytics players, such as Alpine Data Labs, Alteryx, and RapidMiner, are quickly introducing code-free environments with powerful capabilities. The blending, modeling, and automation capabilities of these companies hold up even at massive scale, making them important tools for augmenting the capabilities of data scientists and regular business analysts alike.

It is important to note that what we are talking about, in many of these cases, is an extension of the business analyst’s capabilities. We are broadening what can be accomplished by taking BI to its next logical extension. I’m not suggesting that these functionalities can displace truly advanced analysis across complex time series, data mining, multivariate environments, or Big Data sets from sources like machine sensors. Rather, businesses are demanding greater functionality on top of core BI and analytics capabilities, and the vendors who deliver on this stand to gain.

There will be inevitable pushback as more advanced functionality gets placed in the hands of users that are not formally trained data scientists. Making business decisions on shaky analysis is a dangerous proposition, and no doubt many will wonder if giving functionality to those who don’t know how to use it might be akin to “giving them enough rope to hang themselves.” This is largely a paternalistic viewpoint that will taper off in the same way that similar fears about desktop data-discovery did.  Vendors will continue to build out guided user experiences to assuage these fears, and organizations will put into place safeguards to ensure the quality of underlying analysis. Ultimately, signs point to a future where barriers to advanced data analysis will continue to be lowered, and the capabilities of the data analyst will continue to expand.


Informatica, Permira, Canada, and $5.3 Billion

Today, Informatica announced a definitive agreement to be acquired by a company controlled by the Canada Pension Plan Investment Board and the Permira funds for about $5.3 billion. This agreement has been hinted at for the last couple of months as rumors abounded about Informatica as a private equity target.

As the market leader in data integration and a company that Blue Hill has covered closely, the Informatica acquisition is interesting as a key event in the data integration and data management markets. Although Blue Hill is not a financial analysis firm, we believe that the potential valuation of Informatica is an important marker for the perceived value of Big Data and the Cloud. In that regard, we’re interested in how this works out.

So, what kind of a valuation does Informatica deserve in a fair and just world? First, we’ll note that Nomura Securities downgraded Informatica yesterday on the assumption that the stock had reached its target price of $45. However, the acquisition offer prices Informatica at about $48.75 per share. Is this justified by the current state of Informatica as a business?

We’ll start with the technology, where we actually spend our time. Over the past several years, Informatica has truly turned over a new leaf with the development of self-service enabler Rev, data security product Secure@Source to accompany data masking acquisitions, the acquisition of product information management vendor Heiler, and a significant internal investment in cloud services that has resulted in roughly 50% growth in software subscription revenue year over year. Between these products and Informatica’s other substantial investments in its data integration and management products (adding up to a total of 17% of revenues being reinvested into R&D), Blue Hill believes that Informatica’s product investments are significant and keeping pace in a highly competitive and rapidly evolving Big Data world.

In contrast, Blue Hill believes that TIBCO has made a variety of smart acquisitions over the past three years, including:

* LogLogic for log and security intelligence
* Maporama for geographic intelligence
* Streambase for high performance event processing
* Extended Results for mobile business intelligence
* Jaspersoft for cloud-based business intelligence

Informatica’s intended purchase price is $5.3 billion, whereas TIBCO, a similarly sized competitor that was also taken private, went for $4.3 billion. To figure out why, Blue Hill took a quick look at an apples-to-apples comparison of recent revenue.

(Chart: TIBCO and Informatica revenue comparison)

TIBCO and Informatica had a one-month difference in measuring quarterly results as public companies, but their performance as of this summer provides a starting point for comparison. Both companies saw challenges growing software revenue from 2013 to 2014 due to European currency headwinds, but Informatica showed more consistent revenues across all of its categories from 2013 to 2014. The one big anomaly would seem to be TIBCO’s rapid subscription revenue growth from 2013 to 2014, but this can largely be explained by TIBCO’s acquisition of SaaS BI provider Jaspersoft in April 2014.


In this head-to-head comparison, Informatica is making greater strides in moving to a subscription revenue model, growing service revenues, and maintaining existing software license revenue from year to year. Again, with the caveat that Blue Hill is a technology analyst firm and not a financial analyst firm, the numbers seem fairly clear that Informatica was executing well on its evolution to the cloud and to maintaining client subscription revenue and loyalty.

Ultimately, Blue Hill believes that the Informatica acquisition is an interesting opportunity for private investors to treat Informatica like Dell: a company free to grow and innovate without the quarter-to-quarter pressures of dealing with specific revenue targets. Given the innovation across both Informatica’s cloud delivery and product launch efforts over the past several years, Blue Hill hopes that Informatica’s new overlords see the wisdom of allowing Informatica to continue along its current path of transformation from traditional enterprise application provider to the future of cloud-based information management and integration.


Winning in the New World of Predictive Analytics

Conventionally, the most prevalent use cases for predictive analytics have come in the form of lead scoring or customer analysis. Many organizations have been incorporating these in some capacity for a number of years. Lead scoring provides predictive algorithms that identify which accounts are most likely to yield sales opportunities, and customer analysis helps organizations optimize outreach and engagement efforts. For a deeper dive into customer analysis, take a look at Blue Hill’s past research on customer network analysis and predictive customer analytics.

The proliferation of sensors and the inevitable rise of the Internet of Things (IoT) are creating a tremendous array of new opportunities for predictive analytics to add value to the enterprise. Sensors in devices and equipment can provide vast new streams of information in the form of machine sensor logs and communication between devices. Capturing this information can yield thousands or millions of new data points that organizations can use to build out new predictive models.

Particularly in machinery-centric and heavily monitored operations such as manufacturing, predictive analytics based on machine sensor data can have a meaningful bottom-line impact that would be wise to explore. Information gathered from sensors allows manufacturers to gain better insight into their operations. There is an opportunity to understand machinery constraints (such as production, capacity, and quality) at a deeper level. Manufacturers can use this insight to build predictive models that optimize their efforts while maintaining or improving quality levels. Ultimately, manufacturers have an opportunity to produce higher yields at lower costs.

Integrating predictive analytics with machine sensor data also opens the door for predictive maintenance. Organizations can be alerted to likely problem areas before breakdowns occur, or in the event of breakdowns, predictive analysis can highlight likely causes and areas that led to the breakdown. Prevention and superior identification of problem sectors represent areas of significant cost savings, both in reduced downtime and in the cost of repairs.
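A minimal sketch of the predictive maintenance pattern described above – learn a machine’s normal operating band from a known-healthy period, then flag drift early enough to schedule service – written in Python with invented vibration readings and thresholds:

```python
# Illustrative only: data, machine names, and thresholds are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
readings = pd.DataFrame({
    "machine": ["press-4"] * 200,
    "vibration_mm_s": np.concatenate([
        rng.normal(2.0, 0.2, 180),   # normal operation
        np.linspace(2.5, 5.0, 20),   # bearing beginning to degrade
    ]),
})

# Baseline the "normal" band from the first 100 known-healthy readings.
healthy = readings["vibration_mm_s"].iloc[:100]
threshold = healthy.mean() + 3 * healthy.std()

# Flag drift early enough to schedule service before an outright failure.
readings["alert"] = readings["vibration_mm_s"] > threshold
print(readings[readings["alert"]].head())
```

In production, the same idea would typically be applied per machine and per sensor, with models far richer than a simple three-sigma band, but the workflow is the same: establish normal, detect deviation, act before the breakdown.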

Further, leaps in social media, machine-to-machine (M2M) communication, and the quantification of nearly every business activity are expanding potential use cases for predictive analysis. Given the changing environment and fast-expanding use cases, IT decision makers are often unsure of what role predictive analytics could or should play in their organization.

Given the burgeoning array of new opportunities to incorporate predictive analytics into operations, a host of vendors are delivering solutions specifically suited for emerging use cases. Blue Hill is conducting a series of research initiatives examining high-value use cases for the new generation of predictive analytics tools. In our research report “Extending the Business Value of Predictive Analytics,” we explore opportunities for predictive analytics, and have spotlighted a group of vendors that present compelling options for general and enterprise-wide predictive analytics in today’s marketplace. Blue Hill recommends that IT decision makers explore Dell Statistica, IBM, Revolution Analytics, RapidMiner, SAP, and SAS as their starting point when considering new predictive analytics initiatives for their organization.

In doing so, companies should not just consider traditional predictive use cases, but also consider the proliferation of new data from machine sensors as the Internet of Things begins to mature and carve out a presence in the enterprise.

For more detailed analysis of high-impact opportunities for predictive analytics in the enterprise, as well as more in-depth vendor analysis, be sure to download our research on extending the business value of predictive analytics. To understand when to use each vendor from a financial, technical, or line-of-business perspective, please contact us and Ask an Analyst at no cost.


Qlik Acquires DataMarket

Yesterday, Qlik announced the acquisition of DataMarket, a search engine built to support a market of statistical data and structured data sources. It is especially known for its support of governmental data sources, although it provides feeds from Eurovision, Wikipedia, and other media and information sources.

Although DataMarket and Qlik are not providing guidance on their plans at this time, this acquisition represents Qlik’s increasing focus on data. Long known as a leader in data discovery, Qlik has sought to close gaps in its data management and data sourcing capabilities both with this acquisition and with the 2012 acquisition of Expressor.

Blue Hill Research readers should be aware of the following topics as they consider the ramifications of this acquisition.

Financial Stakeholders: If you have already invested in Qlik, be aware that Qlik is now a data provider that can bring multiple different sources into your data discovery environment. This may represent an opportunity to bring data source and data visualization spend together under a single vendor and to reduce spend with standalone data providers. Expect Qlik to provide competitive prices for third-party data that your organization may use.

Technical Stakeholders: Expect data integration with Qlik to become even easier as Qlik is now able to bring in and normalize a large variety of government data sources. As Qlik and DataMarket come together, technical stakeholders will be able to worry less about data integration with structured and statistical data and focus more on the challenges of semi-structured and unstructured data.

Line-of-Business Stakeholders: The Line-of-Business is about to get a new data marketplace to access a variety of governmental and commercial data sources. As Qlik continues to invest in its new market of data sources and data integration capabilities, expect Qlik to evolve into a one-stop shop to access structured data sources and quickly convert them into data visualizations.

Overall, Blue Hill Research expects that Qlik’s acquisition of DataMarket will result in the acceleration of time-to-value for structured and statistical data as Qlik continues to develop native capabilities to directly translate new data sources into self-service discovery. The end result should be either an actual data marketplace for Qlik or a gateway that will quickly translate third-party data into Qlik outputs. Either way, Qlik’s acquisition represents a strong step forward for self-service analytics with the potential to translate structured and statistical data into user-friendly visualization and discovery environments.


The Pumpkin Spice School of Big Data

In our particular pocket of New England, the leaves are turning golden, and football is replacing baseball on the TVs. This means one thing to coffee drinkers: the re-emergence of the Pumpkin Spice Latte at Starbucks. Over the past ten years, this drink has gone from an odd cult drink to a phenomenon so large that it has earned its own hashtag on Twitter: #PSL.

At the same time, one has to wonder, “What is Pumpkin Spice?” (Other than possibly the long-lost American cousin of the Spice Girls?) Pumpkin spice doesn’t actually have pumpkin in it. And it’s far from the spiciest flavor out there. However, the concept of “pumpkin spice” evokes something handmade, traditional, and uniquely American in a way that draws people into wanting to consume it. Despite its complete lack of pumpkin and relative lack of spice, the flavor created is almost secondary to the cultish conceit that has been constructed around “Pumpkin Spice.”

Unfortunately, the hype, conceptualization, and ubiquity of Pumpkin Spice are matched in the enterprise world by the most overhyped phrase in tech: Big Data. Like Pumpkin Spice, everybody wants Big Data, everybody wants to invest in Big Data tools, and everybody thinks that we are currently in a season or era of Big Data. And in the past, we’ve explained why we reluctantly think the term “Big Data” is still necessary. But when you go behind the curtain and try to figure out what Big Data is, what do you actually find?

For one thing, “Big Data” often isn’t that big. Although we talk about petabytes of data, some practitioners describe “Big Data” problems that are only hundreds of megabytes. These are still sizable chunks of data, but such problems are manageable with traditional analytics tools.

And even when Big Data is “big,” this is still a very relative term. For instance, even when Big Data collects terabytes of data, text, and binaries, the data collected is rarely analyzed on a daily basis. In fact, we still lack the sentiment analysis, video analysis, and audio analysis needed to quickly analyze large amounts of data. And we know that data is about to grow by at least one order of magnitude, if not two, as the Internet of Things and the accompanying billions of sensors start to embed themselves into our planet.

Even outside of the Internet of Things, the entirety of the biological ecosystem represents yet another large source of data that we are just starting to tap. We are nowhere close to understanding what happens in each of our organs, much less in each cell of our bodies. To get to this level of detail for any lifeform represents additional orders of magnitude for data.

And then there’s even a higher level of truly Big Data when we track matter, molecules, and atomic behavior on a broad-based level to truly understand the nature of chemical reactions and mechanical physics. Compared to all of this, we are just starting to collect data on Planet Earth. And yet we call it Big Data.


So, our “Big Data” isn’t big in comparison to the amount of data that actually exists on Earth. And the types of data that we collect are still very limited in nature, since they almost always come from electronic sources, and often lack the level of detail that could legitimately recreate the environment and context of the transaction in question. And yet we are already calling it Big Data and setting ourselves up to start talking about “Bigger Data,” “Enormous Data,” and “Insanely Large Data.”

To get past the hype, we should start thinking about Big Data in terms of the scope that is actually being collected and supported. There is nothing wrong with talking about the scale of “log management data” or “sensor data” or “video data” or “DNA genome data.” For those of us who live in each of these worlds and know that log management gets measured in terabytes per day or that the human genome has 3 billion base pairs and approximately 3 million SNP (single-nucleotide polymorphism) replacements, we start talking about meaningful measurements of data again, rather than simply defaulting to the overused Big Data term.

I will say that there is one big difference between Pumpkin Spice season and Big Data Season. Around the end of the year, I can count on the end of Pumpkin Spice season. However, the imprecise cult of Big Data seems far from over; the community of tech thought leaders continues to push more and more use cases into Big Data, rather than provide clarity on what actually is “Big,” what actually constitutes “Data,” and how to actually use these tools correctly in the Era of Big Data.

In this light, Blue Hill Research promises to keep the usage of the phrase “Big Data” to a minimum. We believe there are more valuable ways to talk about data, such as:

- Our primary research in log and machine data management
- Our scheduled research in self-service topics including data quality, business intelligence, predictive analytics, and enterprise performance management
- Tracking the $3 billion spent on analytics over the past five years
- Cognitive and neuro-inspired computing

By focusing on the actual data topics that provide financial, operational, and line-of-business value, Blue Hill will do its best to minimize the extension of Big Data season.


IBM + Xamarin: Is It an Enterprise Mobile App Golden Age or a Bubble We're In?

In case you weren’t aware of it, IBM’s expansive mobile group has recently formally aligned itself in a partnership with Xamarin, a very sharp company that will now provide IBM with a mobile development platform that allows developers to easily build native mobile apps in C# for iOS, Android, and Windows Phone. The Xamarin platform includes a number of developer tools as well as a cloud-based app testing ecosystem. On the IBM side, IBM’s MobileFirst platform – which includes IBM’s own Worklight mobile app development platform – will provide Xamarin-built apps with cloud and backend enterprise connectivity and data services.

The Xamarin and IBM partnership drives home for me that mobile app development in the enterprise is becoming extremely “frothy.” Though I believe that we’ve been riding the enterprise mobile app wave for several years now, mobile app and MBaaS vendors alike are making a lot of noise about 2014 and 2015 proving to be the true “tipping point” years. For argument’s sake I will grant them this point. That leaves me wondering, however, whether we are now entering a true golden age for enterprise mobile app development, or whether we are instead watching a bubble emerge that may be nearing its bursting point.

I will come back to Xamarin, IBM, and the question of an enterprise app development platform bubble. But first, a few more words on MBaaS platforms, which are important to Xamarin’s future success, are in order.

MBaaS Matters a Great Deal

Last week, I spent some time thinking about MBaaS (Mobile Backend as a Service) becoming the new enterprise mobile architecture of choice. There is one very interesting and key underlying notion about MBaaS: its major goal is to give enterprises a great deal of freedom (or a liberation from the shackles of enterprise IT infrastructure, to give it a bit of a literary feel) to focus their time and efforts on developing rich mobile solutions that meet “business needs.”

Cloud computing and platform as a service (PaaS) capabilities that easily replace old school infrastructure are two of the critical markers that define MBaaS. There are two other markers. The first is the ability to “easily” connect with the myriad backend app and data servers and other enterprise sources (that can include occasionally looney legacy systems such as an old early 1990s VAX system) that a business may need to tap. Extensive yet also simplified backend connectivity capability truly defines MBaaS – at least that’s what I think.

I can also add to the mix here DBaaS – the emerging Database as a Service “next wave” – which startups such as Orchestrate are moving to deliver on. From the 20,000 foot POV DBaaS provides a simple set of APIs that a company can utilize to connect to numerous and diverse backend database systems. I’m going to leave DBaaS for another day, but keep it in mind nonetheless.

The final marker is the very open-ended nature of MBaaS on the mobile app development platform side of things. As important as the cloud and backend services of MBaaS are to its immediate and long-term success, the flexibility enterprises gain in the development tools they can use (such as Xamarin) to actually build their mobile apps may prove the most significant marker overall, and ultimately the greatest driver of mass MBaaS deployments.

From here, it is just a very short leap to extending the enterprise mobile app possibilities out to both the Internet of Things and to enterprise wearable tech. 2015 will indeed be a very interesting year for enterprise mobility!

Lots of Flexibility and Choice

Before I go on, I want to make absolutely clear that there is an enormous amount of complexity that underlies MBaaS. It has been an extraordinary technical challenge that the MBaaS vendors have taken on. Making cloud-based services and complex backend access and implementation appear “easy” to the enterprise – such that enterprise IT teams can almost think of an MBaaS as a nifty mobile development black box – is an unparalleled technical achievement. By this, I mean to equate MBaaS to the emergence and total integration of LAN/WAN in the 1990s, and the Internet/Web since the late 1990s, into the very DNA and fabric of all businesses large or small.

In a few years, all enterprises will have fully integrated MBaaS into their DNA as well. I will go so far as to say that I’m highly confident the security that is part and parcel of successful MBaaS platforms will be such that even today’s on-premise bound verticals – healthcare in particular – will all eventually find themselves MBaaS-based. The demise of on-premise computing is close at hand!

What the MBaaS vendors have achieved is a pure cloud and backend technical accomplishment. But in the grand continuum of enterprise mobility we arrive now at the ultimate judge or arbiter of any mobile application and development effort – the end user (whoever that may be – workforce, partners, customers, or large scale collections of consumers).

One thing MBaaS platforms won’t be able to ensure is the final outcome on how delighted end users will be with the mobile applications that are ultimately delivered through any MBaaS platform. The technical wizardry (and occasional black magic) employed by the MBaaS vendors can only go so far…they can and will free up enterprises to focus on their business needs, but they cannot help businesses actually develop their mobile-based business solutions and apps. Of course.

What MBaaS does do is create a great deal of freedom for enterprises to pick and choose the actual app development platforms that are preferred within an organization or that an organization’s development team may have expertise in. This approach maximizes developer flexibility, and minimizes the need for developers to have to use specific and likely unfamiliar tools required by a given platform.

The reason that the MBaaS vendors focus a great deal of marketing effort on the ability to create “agile” mobile app development environments for their customers is due to this developer tool flexibility. This flexibility in turn gives organizations a great deal of opportunity to focus specifically on business needs as the basis to quickly deliver finely-tuned mobile apps. This is something I will be exploring in detail over the coming weeks and won’t take any further here. It is worth mentioning, however, that the Xamarin-IBM partnership now exists at least in good part for this very reason.


Are Xamarin and IBM a Good Match?

As a front-end development platform and framework, Xamarin has gained a lot of ground in a relatively short period of time. It claims that its platform – which focuses entirely on C# developers – is now used by more than 750,000 developers, some of whom come from over 100 of the Fortune 500 (including Dow Jones, Bosch, McKesson, Halliburton, Blue Cross Blue Shield and Cognizant). That is a heady number of developers, and represents more than 10 percent of the total estimated population of just over 6 million C# programmers.

The partnership with IBM gives Xamarin’s developers integrated access to IBM MobileFirst – and IBM Worklight, which provides Xamarin-built mobile apps with an entirely new suite of secure cloud-based data services (sounds like MBaaS, doesn’t it?). Xamarin and IBM now provide an SDK for Xamarin developers that can simply be embedded in their mobile apps to integrate, secure and manage apps with the IBM MobileFirst platform.

There is much more to what the two companies actually put on the table, but the implementation details aren’t important here. What is important is that Xamarin is now able to provide IBM-sourced cloud and data services capabilities for those Xamarin developers that can benefit from it.

IBM, meanwhile, adds yet another arrow to its already full mobile quiver. Xamarin integration simply provides IBM with the ability to offer its enormous collection of mobile customers additional mobile app developer flexibility, and choice in how they want to – or prefer to – build their apps. Xamarin obviously also gains IBM’s mobile endorsement through the partnership; that will clearly open many new doors for Xamarin.

So yes, it is definitely a good match.

A Bubble or a Golden Age?

The answer to the question I’ve posed depends entirely on whether or not I’m right about how MBaaS is going to play out. If MBaaS does indeed emerge as technology that becomes part of overall business DNA (again, as LAN/WAN and the Internet/Web have become), then it makes a great deal of sense to have substantial app development flexibility and development platform and framework choice.

If MBaaS deployment runs into roadblocks, and if other cloud service options that limit developer choice emerge and become dominant instead, then the current proliferation of MBaaS and app development platforms (along with all the startups in the space) will indeed look like an unsustainable bubble.

That won’t happen, though – I like to think I’m right about MBaaS.

Enterprises really do face a tremendous need to get great mobile apps out the door – there is enormous enterprise demand now being generated for MBaaS and developer flexibility and choice because of this. Assuming that businesses take their strictly business-side mobile homework seriously, the infrastructure and development tools will be there to get high quality mobile apps out the door.

Red Hat/FeedHenry, Kinvey, Pivotal/Xtreme, Appcelerator, AnyPresence, Kidozen, Cloudmine, Sencha, Xamarin, Orchestrate and many other startup and established vendors (among them the usual suspects amid the giant tech companies) all stand to make a mark here. Enterprise mobility is ready to pay out on the bet.

For those of us who have been waiting since the early 2000s for such a mobile moment to become real, it is indeed looking like a golden age is finally here.

