Category Archives: Analytics

Do Not Let Tech Disruption Kill Your TEM Investments!

From my decades-long perch as an observer of leading-edge technology (no kidding – I used to write a column called The Observatory for Internet World back in the day – hmm, in fact I may revive it) I have witnessed many technology-driven business transformations. Some of those transformations were driven by “killer apps” of course, and some emerged over a fairly lengthy period of time.

Over time? Yes – think for example of the “Year of the LAN” mantra many of us witnessed from 1990 to 1992. I believed in it so much that I left Microsoft to become part of the startup team for a tech journal dubbed Network Computing (NWC as we fondly knew it) in 1990 to capture the moment. The truth of the matter, however, is that we never had a year of the LAN. Rather, it sneaked up on us, and one day in 1993 we all woke up to discover that sure enough, we were all LAN-enabled – it had become the “age” of networking. Interestingly, NWC’s own journey echoed that path – we floundered financially (well, we broke even anyway) from 1990 to 1992 but then became highly charged and immensely relevant once LANs and networking technology became pervasive and business-transforming.

I can say the same for mobile technology. I became a mobile research pioneer (along with a small handful of other brave souls) back in 2002-2003, anticipating a revolution driven by enterprise mobility. Eleven years later, in 2013-2014, enterprises finally woke up to the strategic uses of mobility and are now driving the age of mobility.

That bit of personal history now brings me to another technology – Telecom Expense Management (TEM) – that is finally undergoing a unique renewal, at least among some of the more savvy industry players. Now let me be quite honest…some of us – ok, I – have long thought of TEM as the green eye shades end of technology. By this I mean a sleepy cohort of accountant-types reviewing endless wireline, landline, and fax expenses and telecom bills, with analysis driven in large part by offloading most of the number-crunching and report generation to TEM vendors.

A somewhat more modern era of TEM began to emerge in parallel with the rise of the Internet and Web, yet the core functionality of “green eye shades TEM” remained essentially unchanged. Yet another age of TEM began to emerge in parallel with the maturing of smartphones, tablets, and cellular-equipped laptops, but the core functionality of TEM remained entirely unchanged. For me it has long been the case that, just as history does, green eye shades TEM simply likes to repeat itself. It was safe and reliable to stay the course.

It didn’t help the pace of TEM change that, as I noted earlier, enterprise mobility took over a decade to become relevant at a large enterprise scale. Sure, we had lots of technology change, but the changes were merely evolutionary rather than disruptive to businesses. Mobile-driven disruption has tended to occur on the consumer side – it did, after all, give us BYOD. Business technology, however, generally moved forward incrementally rather than disruptively.

Dig a little deeper into the TEM space and it is utterly clear that TEM has undergone a very long-term evolution of incremental improvements since the 1990s, but it has never needed to deliver business services that required it to be disruptive in any real sense. Traditional TEM capabilities – green eye shades TEM – have continued to serve businesses well.

 

But…The TEM Times are A’Changin’ at a Supercharged Pace

Ah, but the safe harbor of incremental TEM technology improvements suddenly disappeared over 2013 and 2014. Actually, and more accurately, it was disrupted.

Enterprises found their way to becoming fully mobile-aware, cloud-driven infrastructure and services adoption (ITaaS and MaaS) grew at lightning speed, big data became really big, and the Internet of Things (IoT) became not only real but profoundly real. Under the covers, processors and memory became many orders of magnitude faster and richer in capabilities, and newer technologies such as software-defined networks (SDN, SD-WAN), in-memory databases, business intelligence/analytics, and machine learning all became enterprise-ready – and deployed.

“Real time” literally became real time…in the moment, of the moment, at the moment. Even simple decision making became disruptive – and a strategic advantage.

These technologies, among others I haven’t noted, suddenly became highly disruptive in nature and began driving enterprises to rapidly adopt them and adapt to the fast-paced technology changes taking place. Enterprises that are seeking to embrace today’s new technologies – and in particular those companies that recognize that today’s technology disruption cycle makes it vital for them to do so – are now declaring “green eye shades TEM” inadequate for meeting the needs of today’s transformative business ecosystems.

The TEM market in turn suddenly found itself needing to step up and greatly broaden its own capabilities, especially in the wake of realizing that there is now a wealth of new opportunities to extend its services beyond core green eye shades TEM to managed mobility and IT Expense Management (ITEM). The industry’s key association, TEMIA, is itself in the process of defining ITEM and the significant shift it entails for businesses.

History is actually beginning to change for TEM instead of repeating itself. Blue Hill Research has noted these emerging opportunities for TEM vendors – which now include the need to seamlessly monitor and manage recurring telecom, IT, and mobility expenses, including the emergence of IoT expense management.

Our research team has taken a strong stab at looking underneath the covers of what it takes to transform from TEM to ITEM – check out “Applying TEM Best Practices to Optimize Your Cloud Investments” for the inside look. It provides a great blueprint for assembling the right enterprise strategy to ensure that your TEM and cloud platforms are fine-tuned for both your present and future needs.

We’ve also been investigating which TEM vendors are best positioned to take advantage of this wave of technology disruption and the emerging opportunities for their own business growth. We’ll deliver a research report on this in the near future.

I’ll wrap this up by also elaborating slightly on the two acronyms I casually dropped earlier – ITaaS and MaaS. “IT as a Service” is a useful term to define the general underlying platforms TEM vendors are now launching to meet the challenges of transforming from TEM to ITEM vendors. In great part this is important as well because a key enterprise consideration for TEM vendor-driven ITaaS is to deploy it to optimize enterprise investments in cloud computing. I recently delivered a webinar on this topic for Calero’s Calero World 2017 – check out “Utilizing TEM Best Practices to Optimize Your Cloud Investments” (http://connect.calero.com/utilizing-tem-best-practices-optimize-investment).

“Mobility as a Service” is the emerging means of describing the end to end Managed Mobility Services (MMS) solutions vendors such as Stratix are now deploying. More on this in an upcoming blog post. Stay tuned!


June 14, 2017: Beyond Self Service: Transforming the Data-Driven Enterprise (Webinar)

Data-driven enterprises evolve through three phases of digital transformation. At each stage, data technology innovations have helped end users get the most out of their data. But the rush to provide self-service capabilities has created challenges, leading to a less-than-constructive relationship between those who consume data and those who manage it. As it moves through the three phases, the data-driven enterprise enjoys accelerated operations, better management, and improved efficiencies. But getting there requires vision, technology, and a commitment to collaborative data orchestration. Enter the “System of Insights” technology model.

Join Blue Hill Research’s Toph Whitmore, principal analyst for Big Data & Analytics; Marcin Grobelny, Director of Professional Services at GoodData; and Sid Shetty, VP, Marketplace Strategy & Experience at ServiceChannel, for Beyond Self Service: Transforming the Data-Driven Enterprise.

The webinar is scheduled for Wednesday, June 14 at 2 pm ET/11 am PT. Please register for the webinar on the GoodData website.


Domo Grows Up: New Enhancements Move BI Leader Up the Data Stack

At its Domopalooza 2017 event held March 22 and 23, 2017, Domo made several significant announcements, but three in particular stood out: Analyzer feature enhancements, a new Domo “Alert Center,” and “Mr. Roboto,” Domo’s new AI capability.

These three announcements (of many) signal a tacit shift in both Domo’s business and marketing messaging: With these improvements, the company has committed itself to moving up the data stack, selling to new enterprise personas, and adopting a DataOps approach to data-workflow management.

In this report, Blue Hill assesses those three announcements, and how they represent a business shift in the way Domo serves (and sells to) its customers.

To read the rest of this report, please fill out the download form.



This Week in DataOps: Manifestos, Shocking Steps, and the Rise of Data Governance


Welcome to the first edition of This Week in DataOps! (And before you ask, no, it probably won’t come out every week.) For a reference point, think of “This Week in Baseball,” only the highlights are about data-derived value maximization. (Yes, that’s the hashtag: #dataderivedvaluemaximization. Lot of competition for that trademark, I bet.)

In this roundup: Two DataOps companies step into the light, two upcoming DataOps events take the stage, and a big DataOps buy signals a big DataOps player’s commitment to data governance transparency.

In news from Beantown, home of BHR HQ, two new startups have taken up the mantra of DataOps. Composable Analytics, based across the Charles in Cambridge, grew out of a project at MIT’s Lincoln Laboratory. Cofounders Andy Vidan and Lars Fiedler started Composable back in 2014 with the aim of delivering orchestration, automation, and analytics, all within a DataOps context. Check out Andy’s lucid manifesto “Moving Forward with DataOps.” (I’m a big fan of DataOps manifestos, by the way.) Key takeaway: Real-time data flows, analytics delivered as a service, and composability are essential to DataOps success.

Another Boston-area firm is making news in the DataOps space. (New Cambridge, Massachusetts tourism slogan: Come for the craft beer. Stay for the data workflow management.) DataKitchen is the self-described “DataOps Company,” and delivers an algorithmic platform based on data “kitchens,” where enterprise data consumers create data “recipes” spanning data access, transformation, modeling, and visualization. And cofounders Christopher Bergh and Gil Benghiat will be speaking on “Seven Steps to High-velocity Data Analytics with DataOps” at this month’s Strata + Hadoop World event in San Jose. (Apparently some of the steps are “shocking!” More details on that not-at-all-clickbaity preso here.)

Speaking of upcoming events, two feature a DataOps agenda. In June, head to…yep, Cambridge, Massachusetts for the DataOps Summit, a two-day show produced by the nice folks at Composable Analytics. Day one will focus on DataOps business use cases, and day two will examine DataOps technical innovations. Speakers include Tamr CEO Andy Palmer, MIT Lincoln Lab researcher Vijay Gadepally, Unravel Data CTO Bala Venkatrao, IBM UrbanCode Deploy product manager Laurel Dickson-Bull, and chief technologist for PwC’s Global Data & Analytics practice Ritesh Ramesh. (Maybe don’t bring up the Oscars with Ritesh.)

And in late May, head to Phoenix for Data Platforms 2017. This year’s theme is “Engineering the Future with DataOps.” The show is sponsored by O’Reilly, Qubole, Amazon Web Services, and Oracle. Featured speakers include former Obama administration “Geek in Chief” R. David Edelman, Qubole CEO Ashish Thusoo, and Facebook engineering director Ravi Murthy.

And in case you missed it:

  • Informatica acquired UK-based data governance software developer Diaku. The Diaku data governance app snaps nicely into the broader Informatica portfolio. Plus, Informatica gets more tech talent and at least some greater foothold in Europe. The purchase signals Informatica’s commitment – and, arguably, that of the broader data-management software space – to DataOps-y principles of orchestration, transparency, and workflow-based collaboration.
  • Tamr just patented its data unification model! As Tamr notes, the concept of data unification may not necessarily be particularly new, but Tamr’s “comprehensive approach for integrating a large number of data sources” coupled with its machine-learning algorithms is uniquely innovative enough to merit patent protection, at least in the judgment of the nice folks at the U.S. Patent and Trademark Office.

That’s it for now. See you next week in DataOps!


Identifying Best-in-Breed USER Applications - User Solution Enterprise Ready



Over the past decade, a new set of user-oriented enterprise applications has arrived to support a variety of business needs, spanning horizontal use cases such as analytics and collaboration as well as departmental use cases such as finance and sales. Blue Hill notes that these User Solution Enterprise Ready (USER) applications have the following attributes:

  • Product and Service Design: Consumer-grade ease-of-use and visual clarity
  • Focused Functionality: Can be used in the first hour that an end user has access to the application
  • User Focus: Focus on the work process associated with a job role or job need
  • Time-to-value: Production-ready in less than two weeks for a defined use case
  • Virality: Easy to expand usage to multiple users
  • Scalable Performance: No concerns in supporting thousands of users on a single corporate instance

By purchasing USER-based applications that include both User Solution focus and Enterprise-Ready functionality, Blue Hill believes that enterprises can turn existing process pain points into low-risk technology solutions that have a payback period of less than six months and provide ongoing value for the organization at large.

In using the acronym USER and including the word “user,” Blue Hill follows the proud “bacronym” tradition of technology that apocryphally includes the venerable email client Pine (Pine Is Not Elm).

To further explore this concept, Blue Hill examines the USER application through the lens of six different solutions. All of these solutions have executed on these characteristics to some extent, but Blue Hill believes that there are specific characteristics where each solution stands out. Blue Hill calls out the following six applications for their success as USER applications:

  • DataRobot: a machine learning predictive modeling optimization solution
  • FloQast: a cloud-based financial close solution
  • IBM Watson Analytics: a predictive analytics solution with natural language inputs
  • Slack: a team collaboration platform
  • Tableau: an analytic discovery and business intelligence platform
  • Tact: a mobile sales enablement application with verbal and natural language interaction

Design: Blue Hill still believes that design is the next great battlefield in enterprise applications, especially for USER applications. The combination of user interface, layout, ease of use, and ease of administration needs to be an active and continuous effort on the part of the vendor, as users want to be able to use their application immediately. In this regard, Blue Hill believes that Tact and DataRobot serve as good examples of smart application design.

Tact (which recently raised a $15 million B round) has a natural language capability that allows sales people to literally speak to their application and get answers to questions such as “what is the status of my sales opportunities?” Even as sales people are on the road or lack access to their computers, they can ask their phones or Amazon Alexas for the information they are looking for.

In contrast, DataRobot requires statistical knowledge to set up proper parameters, but allows users to quickly design a wide portfolio of potential predictive models simply by providing a set of data. These parameters are set up based on standard statistician and data scientist expectations. Although DataRobot is not focused on supporting all employees, it is well-designed for its specialized data scientist audience.

Functionality: USER applications must be focused on providing immediate outputs that can be translated into user value. For instance, DataRobot’s predictive analytics model suggestions based on data scientist logic allow users to quickly translate raw data into business-ready algorithms. By accelerating the data scientist’s ability to test the relevance of statistical models, DataRobot’s functionality leads directly to accelerating time-to-value for predictive analytics.

And in creating predictive recommendations, Blue Hill notes how IBM Watson Analytics has been valuable since its launch in quickly identifying predictive recommendations and providing deeper statistical analysis associated with business data. By providing both a natural language input that allows business managers to ask questions about their data and back-end statistical outputs for further statistical and data scientist analysis, IBM’s Watson Analytics addresses both entry-level and advanced functionality.

User focus: Blue Hill recently analyzed FloQast through the lens of five of its customers to determine the efficacy of the solution in supporting financial close. The success of the application is supported by a company that employs accountants across every aspect of the business and built the product to address accountants’ pain points. This focus drives the simplicity of deployment and the commitment to fit the software around the user’s processes rather than vice versa. This expectation of being a strong user solution is a core aspect of being a true USER application.

The user focus for Tact is evident not only in its functionality, but also in its focus on natural language processing. In plain terms, sales people like to talk. Tact allows them to speak directly to their sales data and get answers spoken back to them rather than relying solely on a text-based interface. By focusing on the user first rather than on what is easiest to support from a technology perspective, Tact stands out in the crowded sales enablement space.

Time-to-Value: To succeed as a USER application, the solution must both be easy to use and provide near-immediate value through its functionality. Both IBM’s Watson Analytics for predictive analytics and Slack for collaboration are usable within minutes of creating an account.

When Blue Hill spoke with FloQast, the accounting close solution provider, we found that software purchasers typically put new employees through a 15-minute training session to get them started on using FloQast to support the monthly close, and have a production implementation in place one to two weeks after signing off on the solution.

Virality: No discussion of USER application virality can be complete without mentioning Tableau, the company that perfected land-and-expand in the analytics world. By providing a fantastic data discovery product at the analyst level, Tableau grew at a meteoric pace by being easy to use and then by winning enough individual users to justify an enterprise-level Server investment. By inspiring consumer-level loyalty and combining it with an effective sales model, Tableau grew into a market leader.

Similarly, Slack has quickly expanded where many collaboration solutions, such as Yammer, Salesforce Chatter, and Skype, have frozen or stumbled, by creating a faster, simpler, freemium product that provides immediate functionality and supports users quickly.

Scalability and Performance: Ultimately, to provide enterprise-wide value, technologies also have to be trusted and available across the entire business. And in understanding this, Blue Hill looks back to Tableau. Over the past decade, Tableau has steadily increased the computing power and number of users supported, to the point where its largest deployments now have thousands of users, and it has launched Tableau Online and Tableau Server on public cloud to shift the responsibility for computing at scale to the cloud.

Blue Hill also notes how IBM Watson Analytics has taken a cloud-based approach to open up predictive analytics to the masses. By providing a fully cloud-based solution, IBM has made predictive analytics more readily available for adoption. By doing so, performance is no longer a potential barrier to ongoing adoption.

Conclusion

Any process that takes up over one week per month or over two hours in any work day should be a viable target for technology-based optimization. As enterprises and large organizations seek software solutions to support these time-consuming tasks, Blue Hill recommends that companies start with a USER application approach. Look for software solutions that are custom-built to solve an end-user’s direct set of problems around a common enterprise challenge.


Connected Data Delivery: Combining Data Unification and Data Preparation

Innovation at the edges of the Big Data value-chain model has, on the one side, fostered sophisticated BI delivery, and on the other, led to flexible, affordable, and scalable data storage. But in the swampy middle, data delivery continues to play process-innovation catch-up, and is typically muddied with manual data pulls, costly process repetition, error-prone reporting, and unmeasurable data integrity.

Efficient data delivery requires both data unification and data preparation capabilities. Data unification is the collection and management of data from multiple sources. Data preparation enables stakeholders to formulate data—essentially the last-mile processing of unified data sets for business analytics consumption. When the two functions are paired in a data-management workflow environment, the data-driven enterprise enjoys efficient, repeatable, silo-breaking data-delivery capability that sets up analysts and data consumers to achieve greater insights.
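To make the pairing concrete, here is a rough, hypothetical Python/pandas sketch – not a depiction of any specific vendor’s product – in which two invented source extracts are first unified into one table and then given last-mile preparation (renaming, typing, de-duplication) for analytics consumption. Every file and column name below is an assumption for illustration only.

```python
# Rough illustration of pairing data unification with data preparation (pandas).
# The CSV files and column names are hypothetical; a real workflow would pull from
# many more sources and apply governed, repeatable rules rather than ad hoc code.
import pandas as pd

# --- Unification: collect and combine data from multiple sources ---
crm = pd.read_csv("crm_accounts.csv")          # e.g. account_id, account_name, region
billing = pd.read_csv("billing_accounts.csv")  # e.g. acct_id, annual_spend
unified = crm.merge(billing, left_on="account_id", right_on="acct_id", how="left")

# --- Preparation: last-mile shaping for business analytics consumption ---
prepared = (
    unified
    .rename(columns={"account_name": "customer", "annual_spend": "spend_usd"})
    .drop(columns=["acct_id"])
    .drop_duplicates(subset="account_id")
    .assign(spend_usd=lambda df: df["spend_usd"].fillna(0).astype(float))
)

prepared.to_csv("customer_spend_prepared.csv", index=False)
```

The point of pairing the two functions inside a managed, workflow-based environment rather than in one-off scripts like this is that the delivery step becomes repeatable and measurable instead of a manual pull.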

To read the rest of this report, please fill out the download form.



Blue Hill Research BI and Machine Learning Highlights, January 2017

To support questions from enterprise buyers and private investors interested in the Business Intelligence market and related analytics and machine learning topics, Blue Hill provides a monthly summary of key enterprise BI and related analytics and machine learning announcements that catch our eye.

In contrast to some of the other enterprise markets that Blue Hill covers, this has been an extremely busy month for business intelligence providers, both in reacting to regulatory issues and in partnering with each other. This month’s roundup features Arcadia Data, Birst, Chartio, DataSift, Domo, GoodData, IBM, Klipfolio, Logi Analytics, Looker, Microsoft, Qlik, and Tableau.

Arcadia Data

Arcadia Data Introduces First Visual Analytics Platform Native to Cloudera Enterprise

On January 11, Arcadia Data announced that it was providing visual analytics directly integrated with Cloudera’s platform, offering a solution for streaming, historical, and unstructured data within a single analytics visualization platform.

Blue Hill believes that this integration is an important step forward for providing performant and multi-sourced visualizations consistent with the variety of Big Data. Ultimately, the value of Big Data does not come solely from the integration and ingestion of massive data volumes, but from the practical capability of gaining real-time visibility to a wide variety of data types in proper context. With this announcement, Arcadia Data is a step ahead of other Big Data analytics solutions in unlocking the analytic value of enterprise Cloudera deployments.

Birst

Birst Achieves EU-U.S. Privacy Shield Certification, Upholds International Data Protection Standards

On January 17, Birst announced TRUSTe validation for meeting Privacy Shield qualifications needed to transfer personal data between the United States and the European Union.

Blue Hill is watching data transfer and privacy rulings between the United States and the European Union closely with the understanding that current governmental policy makes these standards difficult to predict. As Evan Swarztrauber of TechFreedom recently shared in Hadooponomics, Privacy Shield itself is far from perfect. The most important part of this announcement is in Birst’s rapid ability to receive validation. Blue Hill believes that these standards will continue to change rapidly due to governmental policy concerns. As these changes occur, enterprise software providers must be prepared with a flexible toolkit of data configuration, data lineage, and data transfer capabilities to meet legislative standards.

Chartio

Treasure Data And Chartio Announce Integration To Drive Live Business Intelligence

On January 10th, Treasure Data and Chartio announced a partnership to provide a “Live Business Intelligence” integration intended to ingest, unite, and visualize a wide variety of sources through a drag-and-drop interface and at high performance.

Blue Hill notes the competition occurring to be “Real-Time,” “Live,” and “Timely” with data. In this battle, Treasure Data has established itself as a strong supporter of rapid event analysis and a high-performance warehouse for massive and varied data. Chartio is well-known for its ease of use and data visualization capabilities. Together, the two offer what Blue Hill believes is a cost-effective partnership for organizations requiring rapid and industrial-scale analytics. Blue Hill recommends considering this partnership for companies seeking to unlock a wide variety of data sources and quickly translate them into business-friendly visualizations and dashboards.

DataSift

LinkedIn and DataSift Join Forces to Provide Data-Driven Marketers with Advanced Insights

On January 18th, DataSift announced a partnership with LinkedIn and launched PYLON for LinkedIn Engagement Insights to analyze anonymized data across all of LinkedIn’s current dimensions and variables. The intent of this tool is to provide broad-based guidance based on populations of LinkedIn users rather than to track specific users.

Blue Hill analysts have tracked DataSift for years and previously recommended DataSift as a social media analytics tool at DataHive Consulting prior to our acquisition. DataSift initially positions this product as a tool to find new audience segments, test content marketing initiatives, and to effectively measure brand sentiment. Blue Hill believes that DataSift’s PYLON for LinkedIn could also be valuable for recruiting, learning and development, and product and skill adoption.

The true value of aligning social analytics to core business tasks is still in its infancy, since social media analytics has mainly been narrowly focused on marketing rather than a bigger-picture analysis of customer satisfaction, product development, and ongoing talent development needs. With DataSift PYLON for LinkedIn, companies will be better positioned to quickly benchmark themselves, their partners, and their customers in the context of the rest of the world.

Domo

Crimson Hexagon Expands Business Intelligence Footprint Through Domo Partnership

On January 12, Crimson Hexagon announced an integration with Domo to bring social media data into a business analytics environment. The Crimson Hexagon connector is available in the Domo Appstore as a QuickStart app that integrates social media into business workflows.

Again, the trend of social media coming into business intelligence comes into play. From Domo’s perspective, this connector brings an important source of social media data and analytics into the same environment as sales, finance, and other core business intelligence reports and dashboards. Crimson Hexagon is a Boston-area success story in providing monitoring, categorization, and analysis of social media topics and accounts, with a focus on brand management, marketing agencies, and non-profit organizations. Blue Hill believes that this partnership helps introduce Crimson Hexagon to new departments and verticals over time while providing Domo with a strong social media analytics partner that is well suited to account-based and psychographic business drivers.

GoodData

GoodData Adds a Second International Data Center, in Canada

On January 24, GoodData announced its second non-US-based data center, following the 2015 launch of a United Kingdom-based data center, supporting an increasingly international customer list. This data center helps provide improved support, scale, and governance for international clients.

Blue Hill believes that this diversity will be important for companies seeking to have absolute trust in their data residency, and will serve as an advantage for GoodData in maintaining performance and availability regardless of potential technical, legal, or physical risks by providing more options for a global audience.

IBM (NYSE:IBM)

IBM provides TensorFlow deep learning for PowerAI users

On January 26th, IBM announced that its PowerAI distribution, designed to support improved performance for human-like information processing and deep learning, now supports the TensorFlow framework initially designed by Google.

Blue Hill believes that this combination of IBM-optimized hardware and software and the quickly-expanding use of TensorFlow for unstructured data analysis such as text, speech, and video will be used by IBM’s Global Business Services consultants to create flexible solutions based on the TensorFlow framework. It is hard not to notice how this technology directly competes against existing IBM Watson APIs and services, such as the video analysis that Blue Hill recently noted. However, this announcement is in line with the IBM approach of quickly adopting both Open Source and trending technological frameworks, regardless of where they were initially created, with the goal of supporting high-value enterprise solutions.

Klipfolio

Klipfolio Raises $12 Million (Cdn) to Further Boost Growth

On January 5th, Klipfolio announced a $12 million (CDN) Series B funding round led by OMERS Ventures to provide cloud-based dashboards to the small and medium business market. Klipfolio has consistently doubled its customer base year-over-year and now supports over 7,000 customers while supporting over 500 integrations.

Blue Hill believes that Klipfolio is well-suited to provide real-time dashboards at an affordable price and to augment existing business intelligence solutions, especially to support cloud-based services that may not be fully supported by legacy BI solutions. As a solution starting at $300/yr to support five users, Blue Hill believes that Klipfolio is an extremely low-risk and high-reward option to support line-of-business and cloud-based dashboard requests focused on real-time data and performance.

Looker

Looker Launches Support for Amazon Athena

On January 23rd, Looker announced its support for Amazon Athena, which allows users to analyze Amazon S3-based data with SQL and supports standard structured data formats.

Blue Hill observes that this combination makes sense, given Looker’s strong focus on LookML and supporting analytics at massive scale. This move is consistent with the progress that Blue Hill noted from Looker in October 2016 when we said that Looker was “empowering data analysts to implement new business logic more quickly, getting analytics out to a broad audience more quickly through the API, and empowering data users and browsers to provide intelligent and qualitative ways to display and prioritize analytics that matter in the context of daily work.”

With support for Athena, Blue Hill believes that Looker should be considered as a preferred analytic querying and visualization solution for existing data on S3 storage without having to move into RedShift or other analytic environments.

Looker’s support for Athena brings enterprises one step closer to the Holy Grail of being able to conveniently analyze all data by translating CSV, JSON, ORC, Parquet, and other standard data on Amazon S3.
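For a concrete feel of what “SQL on S3” means in practice, here is a minimal, illustrative sketch that submits a query to Athena from Python using the AWS SDK (boto3). The database, table, and results-bucket names are assumptions, and a tool like Looker would issue comparable SQL through its own Athena connection rather than through hand-written code like this.

```python
# Minimal sketch: querying S3-resident data through Amazon Athena with boto3.
# Assumes an Athena table named "events" already defined over files in S3 and an
# S3 bucket for query results; all names here are illustrative, not real resources.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a standard SQL query against the S3-backed table.
query = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
execution_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=execution_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=execution_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```

The same statement could be typed into the Athena console; Looker’s contribution is layering LookML modeling, governance, and visualization on top of queries like these.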

Microsoft (NASDAQ:MSFT)

Push data to Power BI streaming datasets without writing any code using Microsoft Flow

Announcing General Availability of Power BI Real-Time Streaming Datasets

In the last few days of January, Microsoft made two announcements regarding streaming data and Power BI. On January 26th, Microsoft announced the ability to push data into a streaming Power BI dataset with a codeless action. And on January 31, Microsoft added to this by announcing general availability for real-time streaming datasets on Power BI and support for Azure Stream Analytics.

When Power BI first launched in 2015, Blue Hill predicted in “Microsoft’s Power BI Will Transform Enterprise BI” that it would do exactly that, and the product continues to do so by combining an extremely affordable cost with ongoing functionality improvements that expand the scope of BI.

This current focus on streaming analytics reflects the increasing need for enterprises to support log analytics, social media analytics, and the Internet of Things through standard BI and reporting tools. Power BI’s support for streaming data integration and visualization demonstrates the increasing necessity to support streaming data as a foundational BI capability.
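To illustrate the push model behind streaming datasets, the sketch below posts a row to a Power BI streaming dataset’s push URL. The URL placeholders and field names are hypothetical and would come from your own dataset’s API Info settings; the “codeless” path in the first announcement has Microsoft Flow perform an equivalent push on your behalf.

```python
# Minimal sketch: pushing rows into a Power BI streaming dataset over its REST push URL.
# The push URL (with its embedded key) comes from the dataset's API Info page in
# Power BI; the URL and the row fields below are placeholders, not real endpoints.
import json
from datetime import datetime, timezone

import requests

PUSH_URL = "https://api.powerbi.com/beta/<workspace-id>/datasets/<dataset-id>/rows?key=<key>"

rows = [
    {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sensor_id": "line-3-temp",
        "value": 72.4,
    }
]

# Power BI expects a JSON array of row objects matching the streaming dataset's schema.
response = requests.post(
    PUSH_URL,
    data=json.dumps(rows),
    headers={"Content-Type": "application/json"},
)
response.raise_for_status()
```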

Qlik

Qlik Delivers Advanced GeoAnalytic Offerings with Acquisition of Idevio

On January 4th, Qlik announced the acquisition of Idevio to support advanced geographic analytics above and beyond basic mapping capabilities. Idevio is available immediately as Qlik GeoAnalytics, and Qlik has announced plans to integrate it with Qlik Sense in the second half of 2017.

Blue Hill’s take is that as geography and weather become increasingly important to real estate, retail, logistics, and many other business use cases, businesses can increasingly take advantage of advanced and pinpointed spatial data. With this acquisition, Qlik moves closer to other analytics companies known for their geospatial analytics, such as Alteryx and Teradata, and extends its ability to provide contextualized analytic insights.

Qlik Sense Cloud Business Now Available

On January 24th, Qlik announced Qlik Sense Cloud Business, a team and group solution to support analytics in the cloud. The offering will be available at $25 per user per month and provides the Qlik visual analytics platform in the cloud.

Yes, Qlik is now truly in the cloud. Blue Hill covered Qlik’s initial launch of Qlik Sense back in September 2014 as an individual, self-service data discovery tool. With this addition of group-based governance and shared work spaces, Blue Hill expects this offering to be immediately competitive in both the SMB and mid-sized enterprise spaces, based both on the Qlik brand and on the ability to support collaborative analytics.

Tableau (NYSE:DATA)

Tableau Names Winners of 2016 Partner Awards

On January 25th, Tableau announced that it had recognized its top technology and channel partners for 2016. Winners included Icimo, Slalom Consulting, Deloitte, Teknion, Informatica, Sirius Computer Solutions, BluePatagon, and FedResults.

Blue Hill found it interesting that Informatica was named Global Technology Partner of the Year, an award previously won by Alteryx. Although Blue Hill believes that Tableau-Alteryx will be a strong partnership for the foreseeable future, Blue Hill also believes that this award represents a shift as Tableau focuses on enterprise deployments to a greater extent and demonstrates its ability to expand both users and wallet share in its existing market base over time. Now that Tableau has quickly evolved from market upstart to market leader, the company is pursuing deals that reflect this next stage of maturity.

 

If you are interested in a deeper discussion of any of these perspectives or how they affect customer purchasing decisions, please reach out to us at research@bluehillresearch.com.


Talend Winter ’17 Release Adds Data Stewardship, Aims to “Fix” Data Lakes

The new Talend release extends the firm’s enterprise data solution reach beyond its established beachheads in MDM, data integration, and data preparation functionality. In addition to its existing offerings, the new release includes a new “Data Stewardship” app, aimed at both ensuring governance and delivering new levels of data self-service in the modern enterprise. Talend also now offers Spark 2.0 compatibility.

Enterprises considering a data-management solution should now consider Talend in the context of a broader solution offering, in line with larger platform players like the SAPs, IBMs, and Informaticas of the world. And even users seeking point solutions should evaluate Talend, which—though now greater than simply the sum of its parts—offers some pretty attractive parts, as well as its traditionally low barriers to trial.

To read the rest of this report, please fill out the download form.


 


DataOps: The Collaborative Framework for Enterprise Data-Flow Orchestration

DataOps is an enterprise collaboration framework that aligns data-management objectives with data-consumption ideals to maximize data-derived value. DataOps “explodes” the information supply chain to create a data production line optimized for efficiency, speed, and monetization.

Borrowing from production optimization models and DevOps theory, DataOps’ successful adoption requires adherence to three key principles:

  • Global Enterprise Data View: Define data journeys from source to action to value delivered, and measure performance across the entire system.
  • Collaborative Thinking: Structure organizational behavior around the ideal data-journey model to maximize data-derived value and foster collaboration between data managers and data consumers.
  • Get in Front of Data: Decentralize, then empower self-service data services and analytics throughout the organization.

To read the rest of this report, please fill out the download form.



Self-Service Big Data in the Cloud: Questioning Authority with Qubole CEO Ashish Thusoo

This is the fourth in Blue Hill Research’s blog series “Questioning Authority with Toph Whitmore.”

Ashish Thusoo is the co-founder and CEO of Big-Data-as-a-Service provider Qubole. He and I recently talked DataOps, data disintermediation at Facebook, elastic-data pricing models, abstraction layers, and the future of Big Data infrastructure. (Hint: It’s in the cloud.)

TOPH WHITMORE: You were an engineering manager at Facebook, where you implemented a DataOps approach to infrastructure management. You left Facebook in 2011 to start up Qubole. What motivated you to move on?

ASHISH THUSOO: That topic [DataOps] is very pertinent, and is something that a lot of companies struggle with. A lot of the genesis around Qubole was based on that.

Creating these data lakes, operating these Big Data platforms and making them available, making them self-service—those are extremely difficult tasks for most companies. At Qubole, we said, you know what, the best way to do this is to use the cloud! Use the cloud to create Big Data infrastructure that is self-service and automated. Automation takes care of the operational needs around the self-service infrastructure, and the interfaces are self-service enough that a marketing analyst or business analyst or data analyst can go into that infrastructure and do some queries and such.

Qubole is heavily influenced by the experience that we had at Facebook. My cofounder and I joined Facebook in 2007. We had a data-warehousing system, and we had the data team. Analysts would talk to the data team, and the team would then go off to get vanilla data that was stored in silos, and create some summary datasets, then put those into a data warehouse, and then analysts would come in and query that data. The process was very, very slow. Essentially, the direct result of that slow process was that we pulled data, but we didn’t actually use it that much. And the analysts would just go forward with their intuition. Data delayed is basically data denied.

When we went in there, we said this is a broken model, especially for a company that is growing so quickly. We need to rethink this model, and essentially create a self-service platform, which everybody in the company can use, and make the data team support that platform instead of being between the users and the platform.

That model is essentially what we built inside Facebook. The hope and thesis was that if you’re a data analyst, data scientist, developer, or end user, you should be able to get to the data without having to call anyone for help. The infrastructure should make that access easy, and also support that access. So, if you’re writing a query, the operational model should scale enough that it will be able to give you the results in time.

TW: You built this for Facebook. How did you recreate the technology at Qubole?

AT: We were using open-source tools at Facebook. At Qubole, the vision is similar to what we achieved inside Facebook, but with Qubole, we want to achieve it for everyone out there, for every other company. Our mantra here is that if you aspire to be a data-driven company, you should use Qubole. It will help you do that. Much in the same way that that internal platform helped Facebook.

The technology stack is completely different. Facebook was all on-prem. The enabler for Qubole was the cloud. We saw people trying to create data lakes on-prem. With Hadoop, the cost of storage had gone down dramatically. But infrastructure on-prem is still very, very limited. It’s static. You put up your clusters, you put up your systems, and then—even if you put it up so that other people can use the infrastructure—there’s always this risk for the administrator that “I can’t really open this up to everyone because it’s going to be a big problem.”

With the cloud, we turned that on its head. With the cloud, you can create a new system on the fly. It’s completely elastic. With the Qubole platform, you could create these self-service interfaces for data engineers, data scientists, and data analysts.

Our mantra is that—on the cloud platform—for any of the transformations that are coming in to that interface, we create the infrastructure on the fly. We orchestrate the computing infrastructure, and for storage, the data lakes that are being created on the cloud are actually being created on object stores, not in HDFS. They are being created in object stores in Amazon, Oracle, Azure (Microsoft), or Google clouds.

In the cloud, the object store actually decouples compute and storage. You can keep creating the data lake, you can keep putting the data in the object store, and then with a platform like Qubole, you can have an infrastructure that adapts to your compute needs.

TW: How do you differentiate Qubole from the big Hadoop players?

AT: First, we position ourselves as the cloud platform for Big Data. The big difference for the other vendors: The distro distribution mechanism works well if you are doing on-prem. But when you go to the cloud, you can actually see all of this as a Big Data service. You can do a SaaS platform, which will remove all the complexity of having to stand up infrastructure.

Qubole users come in, create a login, and they’re ready! The same infrastructure is ready. And through that SaaS service, we are processing some 575 petabytes of data every month.

Second, the open-source software distros were built in the era of datacenters, and go in the direction of a converged architecture: “Store data in HDFS, and the same machine will be used for computation.”

Cloud architecture has changed that. The diverged architecture, where the storage is in an object store and compute is ephemeral, gives customers a flexible pricing model and an elastic data model. And we position Qubole as a cloud-agnostic cloud data platform that offers Big Data as a service to our clients.

TW: You mentioned the pricing model. I hear concerns from enterprise data leaders about pricing penalties for data growth. Qubole’s message of agile scaling sounds great, but what do I do if I’m about to turn on a new IoT data-delivery system? Will my expenses go up as my data volumes explode?

AT: That is a common issue. And not just for IoT projects. It’s not just the data pricing—the compute can go completely haywire too. You can get a thousand machines in the cloud in a jiffy.

There are two answers. The first is auditability—the ability to give complete visibility to the administrator as to where the costs are going: Which teams are using it more? Are they using it for the right reasons? Are certain datasets being used? Are certain datasets not being used? Can those then be moved to a different archival store? Or maybe should they not be stored at all?

Second is the cloud pricing model we follow. Cloud adoption initially started off in mid-market, maybe typically with the startup, millennial company.

TW: And software devs.

AT: Right. The pricing model was essentially compute hours. For entry, that model is great. And Qubole offers that. But as your computation scales, as your data scales, you get a discounted price for that.

Often, people use our elastic model in the POC stage, or in the early adoption cycle. When it’s clear the extent [to which] they need infrastructure, then they go into subscription pricing, where they buy a certain amount of compute for a certain price. And that scales. Their pricing is not going to go haywire.

TW: You talked about data scientists, data engineers, data analysts using Qubole. What’s their “pain” right now? And how does Qubole alleviate that?

AT: For these personas, self-service is the big thing: “I don’t want to wait for my data, I want it now.”

In most enterprises, a data team empowers all three roles. This is the team that is the internal sponsor for the infrastructure and systems needed to power analysis. Qubole targets the data teams: Instead of being on the receiving end of the ire of the folks saying “Hey, where’s my data?” the data teams can actually say, “You know what, with the help of Qubole, we’ve created this service, this infrastructure, this Big Data platform for you.” It becomes a mechanism for driving a full-blown data transformation, much in the same way that we drove it at Facebook from 2007-2011.

All of the learnings are there, and—as a self-service option—Qubole provides these users with the right tools for what they want to do. For example, Apache Spark is very popular with data scientists, so we have a Spark offering. We support Presto, which is more in tune for a data analyst. The same data platform can also be used by developers, who might be using Hadoop or Spark for writing applications. Or an engineer who might be using Hive for data-cleansing.

For the data team, Qubole becomes very powerful as a single platform. The data team can serve each of these different personas, and the data team is able to have complete control, and full visibility into what is happening. And can drive that infrastructure on any cloud that they want.

TW: The enterprise market question: Have you seen accelerated adoption in particular verticals?

AT: Our strategy has been “follow the cloud.” Some industries, like media, retail, ecommerce, or even enterprise marketing departments, are adopting the cloud before others. But we also see growing interest from healthcare, even financial services.

From the industry perspective, we feel that industry should know how to drive this transformation. What do you need from the perspective of people, processes, and technology to achieve that?

There is a growing realization across different verticals that they have to adopt a culture of DataOps, where data is widely available. Qubole is a catalyst in driving that adoption of data across the enterprise.

TW: We share an interest in that topic! Where do you see cloud-based data services evolving in the next few years, and where does Qubole go from here?

AT: The future is bright! When we started the company in 2011, there was a question mark on whether cloud would actually be the disruptive technology it had the potential to be. That question is answered now.

Companies are moving to the cloud, partly because applications are being built there, and new data is being produced there. But also, those businesses are realizing that they need to become much more agile with respect to their IT service.

In the cloud market, AWS is by far the leader, but we are also seeing the emergence of Azure, the emergence of Oracle, of Google, and more. As that happens, it creates a great dynamic for the market, because it gives companies options. Once you start treating clouds as base-level compute and services, you need services which can be agnostic. Qubole has a very strong role to play in that.

 
