Living with Legacy in an Era of Innovation – A Security Story

Legacy is a perception of investment, and of value. Unfortunately, in the digital transformation era, legacy is often seen as re-investment in what has been rather than in what will necessarily be useful going forward. For me, that is a false dichotomy. When the Year 2000 problem forced firms to touch their systems, some used the opportunity to build more functionality into those systems, while others just fixed the bugs necessary for the changeover. One person’s legacy burden is another person’s opportunity.

But as the volume of legacy in an enterprise grows, how have we grown in our ability to leverage the investment in that legacy — or, for that matter, is it still worth the effort? Do legacy applications house a hoard of useful information and behavior, or are they a ball and chain you should shed if you want to innovate and actively work on transformation?

Legacy constraints often seem immense and burdensome — but do they always need to be? Is object-oriented legacy software spaghetti code, or is it more like ravioli? Do agile methods embrace or reject the use of legacy? I am writing a series of blog posts on legacy and innovation, disproving the myth that old equals out of date and useless.

In this blog post, I will look at legacy with regard to security and the streamlining of security operations. The shift to cloud and mobile has not always been graceful for organizations and has been disruptive to the way we deploy security controls. Making significant changes to the authentication flow, the one security control that gates all vital access and privilege, is an enormously arduous and fragile task. The modern ‘mobile-first’ access pattern has thrown a wrench into what was otherwise straightforward management of account security.

Not only are modern security controls challenging to adapt and apply to legacy infrastructure and interfaces, but legacy security controls tend to fall flat when it comes to modern infrastructure. How do you deploy your legacy security controls in the world of cloud and mobile when you don’t control the endpoint, network, application or infrastructure?

Authentication is often the only effective security control you have left in a modern, cloud- and mobile-enabled IT environment. So you had better be damn sure that authentication control is more than a simple password. Yet many organizations still rely on just that. Why is this?

I have done several authentication projects recently, and one of the main challenges I have seen is a lack of understanding of what must be protected and by whom. Too often, the focus is on cost and procedure, not on an understanding of the dataflow and the number of endpoints involved in protecting the data. So why do the means to modern authentication seem difficult and expensive, and why do we worry so much about the impact on user experience when we never did in legacy? (wry smile). Let’s look at why 2FA, SSO and biometrics have never caught on with many legacy houses, and why some still stick with passwords 10 years after many predicted their demise.

Two-factor authentication is becoming the norm for password security, in what amounts to a reasonable concession from users to IT staff pleading with them to follow basic password security protocols. Since almost no one follows those protocols, two-factor authentication has become the stop-gap. Although passwords are bad, biometrics and other mechanisms were never considered a good replacement because they all suffered their own flaws and could not counteract the biggest advantage passwords have going for them: they are cheap and convenient. Today we are seeing a growing movement away from explicit, one-point-in-time authentication to a recognition model that mixes implicit factors — such as geolocation, device recognition and behavioral analytics — with explicit challenges such as passwords, biometrics, OTPs [one-time passwords] and dynamic KBA [knowledge-based authentication] based on identity verification services. I just borrowed a colleague’s login to use an online application, was denied based on geolocation, and was asked for a verification code sent to his email. Given that he is (hopefully) asleep in Canada and I am in Belgium, this stopped my progress with the app.
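
As a rough illustration of the recognition model described above, here is a minimal sketch in Python. The signal names, weights, and thresholds are my own assumptions for illustration, not any vendor’s actual scoring logic.

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool    # device fingerprint seen before on this account
        country: str          # geolocated country of this request
        home_country: str     # country the account usually logs in from
        hour_of_day: int      # local hour of the login attempt

    def risk_score(ctx: LoginContext) -> float:
        """Combine implicit factors into a 0.0-1.0 risk estimate."""
        score = 0.0
        if not ctx.known_device:
            score += 0.4
        if ctx.country != ctx.home_country:
            score += 0.4
        if ctx.hour_of_day < 6 or ctx.hour_of_day > 22:
            score += 0.2
        return min(score, 1.0)

    def required_challenge(ctx: LoginContext) -> str:
        """Map the risk estimate to an explicit challenge."""
        score = risk_score(ctx)
        if score < 0.3:
            return "password only"
        if score < 0.7:
            return "password plus one-time code sent to the registered email"
        return "deny and alert"

    # A login from Belgium on a Canadian colleague's account, from an unknown device:
    ctx = LoginContext(known_device=False, country="BE", home_country="CA", hour_of_day=9)
    print(required_challenge(ctx))  # -> deny and alert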

Now that we are throwing mobile into the mix, many firms are starting to use mobile push, assuming we are glued to our mobile devices (at least the folks under 30) and can use them as authenticators. Mobile OTP and mobile device authenticators add some value in a 2FA approach, assuming you have not lost the device and it still has battery. But for security, do remember that a smartphone can still receive and display social media or text message alerts even when the device’s screen is locked and the application pushing the notification is closed.
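
For context on what a mobile OTP actually is under the hood, here is a minimal TOTP (RFC 6238) sketch using only the Python standard library; the shared secret below is a well-known test value, not anything tied to a real account.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Time-based one-time password: HMAC-SHA1 over a 30-second time counter."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval              # moving time window
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app would display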

Basically, the security measures we use today reflect our risk tolerance and desire for simplicity. We assumed the hardware and systems were defended, and that the endpoints were irrelevant because of strong system security. Appropriate security depends on how valuable the data in the transaction is and what other protection is available for it (encryption, public key infrastructure, etc.). Legacy complexity can be a good thing if the data is valuable. But we now work with the data at the endpoints, and therefore we need to find a way to block endpoint activities when necessary, using legacy technology.

Send in the Fog

We start with a basic fog question: When and where should we use fog computing in our network?

The basic premise of fog computing is decentralization of data processing as some processing and storage functions are better performed locally instead of sending data all the way from the sensor to the cloud and back again to a control mechanism. This reduces latency and can improve response times for critical system functions, saving money and time. Fog computing also strives to enable local resource pooling to make the most of what’s available at a given location.

I believe the opportunity for this kind of distributed intelligence, and for the associated intelligent gateways needed for fog computing, is strongest when these two conditions are met:

1. The focus of data analytics is at the aggregation level, so the closer to the data the better; and
2. There is enough protocol complexity that handling it locally actually makes more sense.

Markets that have these needs include manufacturing, extraction industries (energy, for example), and healthcare. Applications such as smart metering can benefit from real-time analytics of aggregated data that can optimize the usage of resources such as electricity, gas, and water. Local-level analytics suits applications where the data must be stored and analyzed locally, either for regulatory reasons or because the cost of transporting the data upstream and the associated wait time for analysis is prohibitive, as with airline maintenance data.
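
As a minimal sketch of that local-first pattern, the snippet below aggregates raw smart-meter readings on a fog node and forwards only the summaries upstream; the reading format, window size, and peak threshold are illustrative assumptions.

    from statistics import mean

    def summarize(readings_kwh):
        """Reduce a window of raw meter readings to one compact record."""
        return {
            "samples": len(readings_kwh),
            "mean_kwh": round(mean(readings_kwh), 3),
            "peak_kwh": max(readings_kwh),
        }

    def gateway_loop(windows, send_upstream, peak_limit_kwh=5.0):
        """Runs on the fog node: act locally on peaks, forward only summaries."""
        for window in windows:
            record = summarize(window)
            if record["peak_kwh"] > peak_limit_kwh:
                print("local action: shed non-critical load")  # no cloud round trip
            send_upstream(record)                              # one record per window

    # Two windows of readings; upstream delivery is stubbed out with print.
    gateway_loop([[4.2, 4.4, 4.1], [5.3, 5.8, 5.1]], send_upstream=print)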

One major network bandwidth issue for IoT in the coming years is subsidiarity: making sure that data analysis is done at the level appropriate to the speed and efficiency the application demands. In most cases there will be a blend of approaches, and the ability to manage local as well as central applications will be increasingly critical to data analysis speed and functionality.

Use cases for fog computing and IoT

Good use cases for fog computing will be ones that require intelligence near the edge, where ultra-low latency is critical. Good examples of fog computing in energy can be found in both home energy management (HEM) and microgrid-level energy management. HEM can use IoT to transform an existing home into a smart home by integrating functionality such as temperature control, efficient lighting, and management of smart devices. A microgrid is a smart distribution device that can connect to and disconnect from the grid, enabling it to operate in either grid-connected or standalone mode.

My own interest is in connected buildings and smarter rooms in office buildings. Here there is a demonstrated need for edge intelligence and localized processing. A commercial building may contain thousands of sensors measuring various operating parameters: temperature, keycard access, parking space occupancy, and more. Data from these sensors must be analyzed to see whether actions are needed, such as triggering a fire alarm if smoke is sensed. Fog computing allows for autonomous local operation and optimized control functions. This is useful for building automation, smarter cities, smarter hotels and more automated offices.
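
To make the "analyze locally, act locally" point concrete, here is a hypothetical rule table a building gateway might evaluate at the edge; the sensor names, thresholds, and actions are illustrative only.

    AUTHORIZED_BADGES = {"badge-001", "badge-002"}

    # Each local rule maps a sensor reading to an action, or None for no action.
    RULES = {
        "smoke":   lambda v: "trigger_fire_alarm" if v else None,
        "temp_c":  lambda v: "start_cooling" if v > 26.0 else None,
        "keycard": lambda v: "unlock_door" if v in AUTHORIZED_BADGES else "log_denied_entry",
    }

    def on_reading(sensor, value):
        """Evaluate one reading locally and return the action to take, if any."""
        rule = RULES.get(sensor)
        return rule(value) if rule else None

    print(on_reading("smoke", True))          # -> trigger_fire_alarm
    print(on_reading("keycard", "badge-007")) # -> log_denied_entry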

One good example of an architecture that takes this into account is Flextronics’ Smart Automation project. Another can be seen in the Raiffeisenbank Romania headquarters, which uses redundant control systems for maximum reliability.

To conclude, there is a whole sub-layer of functionality where fog computing can quickly and autonomously take over control and build edge intelligence within the enterprise. Our Industry 4.0 research continues to examine edge intelligence activities in which central computing resources still retain a viable role in the enterprise.

PTC ThingWorx Announces Kinex: The Next MobileFirst for iOS of the Industrial IoT

First came ThingWorx, then came Kinex, and the parallel with IBM’s MobileFirst for iOS application development platform is clear (though Kinex serves a different enterprise function). PTC is the next Industrial IoT behemoth to recognize that successful IIoT deployments require supporting applications that bring together a range of product and operations data in a single source.

On April 6, PTC announced the launch of Kinex, a suite of role-based, Industrial IoT (IIoT) applications built on the ThingWorx platform. By offering both branded and business-specific applications, the platform is similar to IBM’s MobileFirst for iOS (the success of which bodes well for PTC). PTC now provides both the IoT connectivity layer, with ThingWorx, and the application layer, with Kinex, and its expertise in Industrial IoT can aid companies in more quickly bringing application-supported IIoT innovations to market.

Kinex applications are designed to bring together data from enterprise systems and physical sensors to draw insight from and change the process of how IIoT products are designed, manufactured, serviced, and used by companies. PTC’s first branded Kinex application is Kinex Navigate, which has seen relatively quick adoption with over 125,000 seats sold. Kinex Navigate allows anyone within an organization to access up-to-date product lifecycle data pulled from multiple systems of record. Along with PTC’s Windchill system for product lifecycle management (PLM), the application enables universal data access and timely product data to drive better product decisions. PTC plans to release additional Kinex apps in the future that will allow enterprises to build on ThingWorx development capabilities to add business-specific, custom functionality.

By introducing the Kinex suite to develop Industrial IoT applications built on the ThingWorx platform, PTC aids enterprise customers in going to market quickly with new IIoT solutions and services. Customers can choose either branded apps such as Kinex Navigate, or create custom business functionality by building on top of the ThingWorx platform and Kinex applications.

How Does Kinex Hark Back to IBM MobileFirst for iOS?

This move is similar to IBM’s success in partnering with Apple to create a family of applications through the MobileFirst for iOS app development and mobile management platform. MobileFirst for iOS offers a suite of industry use case applications, including pre-built apps based on industry templates or fully customized apps. The goal of IBM and Apple’s MobileFirst for iOS partnership was to change the way employees work by integrating mobile-based process changes on the front end with IBM’s cognitive analytics on the back end.

MobileFirst for iOS is a highly successful partnership that leveraged IBM’s core capabilities in cognitive analytics. This success should bode well for PTC given its expertise in IIoT. While IBM chose to partner for the application development portion, PTC achieves greater control over Kinex applications by building them on top of the ThingWorx platform. MobileFirst for iOS expanded IBM’s presence in the enterprise by supporting custom enterprise apps with business-specific functionality. PTC aims to do the same in Industrial IoT by offering both branded and custom applications that leverage PTC’s strengths: IoT equipment and connectivity, as well as product lifecycle management (PLM) and data management systems.

Kinex is a smart move for PTC. By controlling both the IoT platform and the application, PTC gains a broader footprint in IIoT, something a number of large, global IoT players are racing to accomplish. Nearly a year out from writing about MobileFirst for iOS, I can see that the step was strategic for IBM in combining cloud, mobile, and analytics into enterprise-grade iOS apps, and thus expanding IBM’s (and its cognitive solutions’) reach in the enterprise. I expect I’ll see a similar result from PTC’s investments in IIoT with the Kinex application suite.

4TelecomHelp and Juvo Announce Integrated Platform; IoT Enters TEM Conversation in Mid-Market

On April 4, 4TelecomHelp announced an all-in-one SaaS platform for TEM and WEM called 4-Titan, developed through a partnership with Juvo Technologies. The platform is built for end-to-end telecom and mobility management with a ‘Four Cornerstone’ approach that ties in inventory, contracts, operations, and expenses. 4TelecomHelp has developed and supported a number of standalone platforms over the past decade but will now be able to offer a single, integrated platform to its users. Key takeaway? This comprehensive, centralized approach is well suited to new management categories such as cloud and software licenses, and IoT devices, machinery, and sensors that TEM companies are increasingly being asked to manage. Additionally, the single platform SaaS offering will enable 4TelecomHelp to sell into larger enterprise accounts than the company has typically targeted, as mid-to-large sized companies most often favor an all-in-one managed services approach for telecom and mobility.

In our December 2016 Mid-Market TEM Landscape, Blue Hill noted that 4TelecomHelp primarily targets companies with around $500,000 per month in telecom spend, but has some accounts with as little as $100,000 in monthly spend, as well as a few larger and Fortune 500 clients. Mobile makes up 25-30% of 4TelecomHelp’s business. 4TelecomHelp does not often go head to head against other TEM companies, but most often competes directly against telecom consulting companies due to its focus on custom engagements and project-based work. With a more comprehensive, single-platform offering through its partnership with Juvo, 4TelecomHelp will be poised to sell to more mid-to-large enterprises, as well as increase its share in mobility.

The 4-Titan platform is aimed at addressing not only current telecom and mobility needs such as Bring Your Own Device (BYOD), but also future-facing IT management categories such as Internet of Things (IoT) connected devices, machinery, and sensors. For TEM vendors to successfully manage new IT categories such as IoT, cloud, and software licenses, they will need to support a platform that brings contracts, invoices, inventory, and usage data together, as 4-Titan is positioned to do. Looking forward, managing and optimizing not only telecom and mobility but also sensors, connected devices and equipment, and cloud and software licenses is where the TEM industry is headed – or at least, in my opinion, where it needs to head.

Also interesting to note: Juvo and 4TelecomHelp met through TEMIA, the Technology Expense Management Industry Association. A few weeks back, I was in New Orleans at the semi-annual TEMIA meeting along with nearly 40 companies in the TEM and Managed Mobility Services spaces. Part of the conversation at the meeting centered on how the term Telecom Expense Management is becoming outdated and no longer represents where the industry is headed – or, for some players, where it currently stands. Reflecting this, TEMIA changed its name from Telecom to Technology Expense Management as TEM vendors began supporting a broader range of IT technologies and companies focused exclusively on mobility began entering the TEM space.

I’m impressed to see a mid-market TEM vendor begin making investments to future-proof its platform for emerging technology categories such as IoT. While large, global TEM vendors are more frequently highlighting their ability to support new IT categories such as cloud and IoT, the trend for TEM vendors to manage additional IT assets and spend is clearly present in the mid-market as well. Based on the conversations I’ve had with TEM vendors and clients through my work with Blue Hill, I’d advise mid-market TEM vendors to begin investing to support new enterprise technologies and IT assets within their platforms in order to remain competitive not only with global TEM vendors but also with smaller, mid-sized, and regional players.

This Week in DataOps: The Tradeshow Edition

DataOps wasn’t the most deafening sound at Strata + Hadoop World San Jose this year, but as data-workflow orchestration models go, the DataOps music gets louder with each event. I’ve written before about Boston-based DataOps startup Composable Analytics. But several Strata startups are starting to get attention too.

Still-in-stealth-mode-but-let’s-get-a-Strata-booth-anyway San Francisco-based startup Nexla is pitching a combined DataOps + machine-learning message. The Nexla platform enables customers to connect, move, transform, secure, and (most significantly) monitor their data streams. Nexla’s mission is to get end users deriving value from data rather than spending time working to access it. (Check out Nexla’s new DataOps industry survey.)

DataKitchen is another DataOps four-year-overnight success. The startup out of Cambridge, Massachusetts also exhibited at Strata. DataKitchen users can create, manage, replicate, and share defined data workflows under the guise of “self-service data orchestration.” The DataKitchen guys—“Head Chef” Christopher Bergh and co-founder Gil Benghiat—wore chef’s outfits and handed out logo’ed wooden mixing spoons. (Because your data workflow is a “recipe.” Get it?)

DataOps in the wild — The Nexla and DataKitchen exhibition booths at Strata + Hadoop World San Jose.

Another DataOps-y theme at Strata: “Continuous Analytics.” In most common parlance, the buzzphrase suggests “BI on BI,” enabling data-workflow monitoring/management to tweak and improve, with the implied notion of consumable, always-on, probably-streaming, real-time BI. Israeli startup Iguazio preaches the continuous analytics message (as well as plenty of performance benchmarking) as part of its “Unified Data Platform” offering.

I got the chance to talk DataOps with IBM honchos Madhu Kochar and Pandit Prasad of the IBM Almaden Research Center. Kochar and Prasad are tasked with the small challenge of reinventing how enterprises derive value from their data with analytics. IBM’s recently announced Watson AI partnership with Salesforce Einstein is only the latest salvo in IBM’s efforts to deliver, manage, and shape AI in the enterprise.

Meanwhile, over in the data-prep world, the data wranglers over at Trifacta are working to “fix the data supply chain” with self-service, democratized data access. CEO Adam Wilson preached a message of business value—Trifacta’s platform shift aims to resonate with line-of-business stakeholders, and is music to the ears of a DataOps wonk like me. (And it echoes CTO Joe Hellerstein’s LOB-focused technical story from last fall.)

Many vendors are supplementing evangelism efforts with training outreach programs. DataRobot, for example, has introduced its own DataRobot University. The education initiative is intended both for enterprise training and for grassroots marketing, with pilot academic programs already in place at a major American university you’ve heard of but that shall remain nameless, as well as the National University of Singapore and several others.

Another common theme: The curse of well-intentioned technology. Informatica’s Murthy Mathiprakasam identifies two potential (and related) data transformation pitfalls: cheap solutions for data lakes that can turn them into high-maintenance, inaccessible data swamps, and self-service solutions that can reinforce data-access bad habits, foster data silos, and limit process repeatability. (In his words, “The fragmented approach is literally creating the data swamp problem.”) Informatica’s approach: unified metadata management and machine-learning capabilities powering an integrated data lake solution. (As with so many fundamentals of data governance, the first challenge is doing the metadata-unifying. The second will be evangelizing it.)

I got the opportunity to meet with Talend customer Beachbody. Beachbody may be best known for producing the “P90” and “Insanity” exercise programs, and continues to certify its broad network of exercise professionals. What’s cool from a DataOps perspective: Beachbody uses Talend to provide transparency, auditability, and control via a visible data workflow from partner to CEO. More importantly, data delivery—at every stage of the data supply chain—is now real time. To get to that, Beachbody moved its information stores to AWS and—working with Talend—built a data lake in the cloud offering self-service capabilities. After a speedy deployment, Beachbody now enjoys faster processing and better job execution using fewer resources.

More Strata quick hits:

  • Qubole is publishing a DataOps e-book with O’Reilly. The case-study focused piece includes use-case examples from the likes of Walmart.
  • Pentaho is committed to getting its machine-learning technology into common use in the data-driven enterprise. What’s cool (to me): the ML orchestration capabilities, Pentaho’s emphasis on a “test-and-tune” deployment model.
  • Attunity offers three products using two verbs and a noun. Its Replicate solution enables real-time data integration/migration, Compose delivers a data-warehouse automation layer, but it is Attunity’s Visibility product that tells the most interesting DataOps story: It provides “BI-on-BI” operations monitoring (focused on data lakes).
  • Check out Striim’s BI-on-BI approach to streaming analytics. It couples data integration with a DataOps-ish operations-monitoring perspective on data consumption. It’s a great way to scale consumption with data volume growth. (The two i’s stand for “Integration” and “Intelligence.” Ah.)
  • Along those same lines, anomaly-detection technology innovator Anodot has grown substantially in the last six months, and promises a new way to monitor line-of-business data. Look for new product, package, and service announcements from Anodot in the next few months.

Last week I attended Domo’s annual customer funfest Domopalooza in Salt Lake City. More on Domo’s announcements coming soon, but a quick summary:

  • The tone was noticeably humble (the core product has improved dramatically from four years ago, when it wasn’t so great, CEO Josh James admitted in his first keynote) and business-value-focused. (James: “We don’t talk about optimizing queries. (Puke!) We talk about optimizing your business.”)
  • There was a definite scent of DataOps in the air. CSO Niall Browne presented on Domo data governance. The Domo data governance story emphasizes transparency with control, a message that will be welcomed in IT leadership circles.
  • Domo introduced a new OEMish model called “Domo Everywhere.” It allows partners to develop custom Domo solutions, with three tiers of licensing: white label, embed, and publish.
  • Some cool core enhancements include new alert capabilities, DataOps-oriented data-lineage tracking in Domo Analyzer, and Domo “Mr. Roboto” (yes, that’s what they’re calling it) AI functionality.
  • Domo also introduced its “Business-in-a-Box” package of pre-produced dashboard elements to accelerate enterprise deployment. (One cool dataviz UI element demoed at the show: Sample charts are pre-populated with applicable data, allowing end users to view data in the context of different chart designs.)

Finally, and not at all tradeshow-related, Australian BI leader Yellowfin has just announced its semi-annual upgrade to its namesake BI solution. Yellowfin version “7.3+” comes out in May. (The “+” might be Australian for “.1”.) The news is all about extensibility, with many, many new web connectors. But most interesting (to me at least) is its JSON connector capability that enables users to establish their own data workflows. (Next step, I hope: visual-mapping of that connectivity for top-down workflow orchestration.)

Four Tips for Designing a User Interface

Note: This blog is the third in a monthly co-authored series written by Charlotte O’Donnelly, Research Associate at Blue Hill Research, and Matt Louden, Brand Journalist at MOBI. MOBI is a mobility management platform that enables enterprises to centralize, comprehend, and control their device ecosystems.

Capable software is a powerful competitive business advantage. Without an easy-to-use interface, however, it often fails to make the lasting impact your Information Technology (IT) department expects. Whether your enterprise is designing a User Interface (UI) for the first time or making changes to a preexisting one, be sure to keep these four tips in mind:

Do Your Research

More than anything else, organizations make the mistake of implementing changes and new UI features based solely on what users want. While the intent is admirable, it’s important to remember that a product’s audience brings suggestions to the table, not solutions. User requests can be unreasonable or downright impossible to implement if the requesters don’t understand the scope of work or technology required.

However, that doesn’t mean user feedback should be completely ignored. When properly vetted, it can be a valuable research tool. ESPN.com, for example, increased its overall revenue by 35% after selectively incorporating visitor suggestions into its website redesign.

The first step for any UI project should be conducting thorough, fact-based product management and user experience research. This uncovers the most critical user needs and gives an enterprise a definitive rationale for any changes and/or feature additions. Thanks to careful research at this stage, Bing.com generated an additional $80 million in annual revenue by selecting a specific shade of blue for its UI.

After initial research is conducted, protocol-based interviews, paper prototyping, and UI testing can help resolve issues before a new product release even takes place. Development team involvement in these tasks provides additional benefits, as any relevant findings and ideas are properly translated and incorporated into UI design as early as possible. In late-stage user testing, noting any common areas of confusion also ensures the effectiveness of future training efforts.

Focus on Form and Function

UI design involves two separate aspects: interface and workflows. It’s important for an enterprise to anticipate and understand how users will react to changes in both components. In today’s constantly connected digital landscape, full functionality needs to be optimized across all platforms, not just traditional desktop environments. In fact, 83% of users say a seamless experience across platforms is either somewhat or very important to UI design.

While interface changes are immediately visible and create instant, emotional reactions, workflow differences take longer for users to notice and evaluate. In both cases, be sure to sift through initial concerns for any lasting impact that could remain after adjustments are made.

Leaving project calendars clear for at least a few weeks after significant design changes are made prioritizes a product’s user experience and ensures issues can be fixed when they inevitably arise. After all, 52% of users are less likely to engage with a company after a poor user experience.

Take Risks

Fortune favors the bold when it comes to software product design, but unfortunately some companies hesitate to make changes when they’ve already experienced some level of success. Companies can be lulled into complacency, causing them to fall behind the rest of their respective markets.

Undertaking a significant UI update comes with legitimate concerns, but as technology rapidly evolves and changes, the likelihood of product stagnation increases, and its impact becomes potentially more damaging. You may need to inconvenience your user base in the short-term to bring a big payoff down the road.

Even an enterprise giant like Apple takes risks and changes its product in anticipation of future opportunity. After surveying app developers, the company realized that alienating this group would drive revenue to competing platforms and potentially harm the App Store’s future. Despite 40% revenue growth in 2016, it decided to build new analytics tools and update the store’s interface to allow developers to respond directly and publicly to customer reviews.

Remember: No Solution is Perfect

Even the most cutting-edge, revolutionary software developments are met with complaints, so expect them any time a UI is updated or changed. Users are rarely satisfied with changes right away, so remain level-headed when responding, and keep in mind that concerns don’t always indicate a widespread problem.

Few innovations are ideal for an entire user base, so decisions should be made based on evidence and research that identifies critical tasks and the most important design elements. Randomly surveying a target audience not only helps determine the validity of complaints, but provides insight into whether that group truly represents a product’s primary user base.

Before releasing any new UI feature, roll out the improved product to a small user group without notifying them of the change to seek honest impressions and reactions. After further time has passed, contact the users again to gain additional feedback and accurately gauge the success or failure of any updates.
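
One common way to implement that quiet, small-group rollout is to bucket users deterministically by hashing their IDs, so the same users always see the new interface; the 5% threshold and user IDs below are illustrative assumptions.

    import hashlib

    def in_rollout(user_id, feature, percent):
        """Return True if this user falls inside the rollout bucket for a feature."""
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0x100000000   # stable value in [0, 1)
        return bucket < percent / 100.0

    # Quietly expose the new UI to roughly 5% of users.
    pilot_users = [u for u in ("u1001", "u1002", "u1003", "u1004") if in_rollout(u, "new-ui", 5)]
    print(pilot_users)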

Ultimately, no two UI design projects are created or implemented equally. Careful product and user base research are key to successfully updating or changing software. Though it can be an arduous process, the potential payoff for an organization is huge. Even industry-leading platforms can use the occasional new look.

Data on a Social Mission: Questioning Authority with Data for Democracy’s Jonathon Morgan

This is the sixth in Blue Hill Research’s blog series “Questioning Authority with Toph Whitmore.”

At a time when many of us seek ways to apply our skills in the service of the greater social good, some people are actually doing it. In late 2016, Jonathon Morgan, CEO of Austin, Texas-based startup New Knowledge, created Data for Democracy, a loosely formed coalition of data experts, analysts, engineers, project managers, scientists, and more. The organization tasks itself with the rather noble aim of solving real-world problems. Data for Democracy (or “D4D,” as the kids call it) crowd-sources its attacks on challenges like big-city traffic optimization, international refugee migration forecasting, and more. A recent contest collaboration with KDNuggets tasked entrants with devising algorithms to detect “fake news.” I spoke with Jonathon about D4D’s mission, community, and opportunity.

TOPH WHITMORE: Tell me about Data for Democracy. Who’s involved, and what kind of work are you doing?

JONATHON MORGAN: Data for Democracy is a volunteer collective. We’re about a thousand people right now, including data scientists, software engineers, and other technologists. There’s about a dozen active projects—some are community-led. For instance, there’s a group of folks who’ve been collecting election results data dating back to 1988 at the county level. That involves calling secretaries of state in different states around the country, collecting that data however they produce it, and going through the often-manual process of cleaning up that data and packaging it in a way that other people can use.

There are also projects where we partner with an existing organization. We’re working on a data model with the city of Boston so that we can ultimately produce an application that Boston citizens can engage with to experiment with how traffic fatalities can be reduced across the city. We’re also working with the Internal Displacement Monitoring Center (IDMC) on a project to understand the flow of refugees internally within a country based on conflict or a natural disaster. It’s a wide range of projects, which is important with a group this size. But almost everything is community-driven, community-led. Everybody’s a volunteer. We’ve been active for about three months.

TW: So Data for Democracy is composed of volunteers—What’s your mission or charter? What brings these volunteers together?

JM: The mission is broad. We are a community using data and technology to work on meaningful social impact projects, full stop. As for the genesis of it, there seemed to be a sense in the technology community – and in particular the data science community – that had been growing for some time: a need for that community to understand and discover its civic responsibilities.

Perhaps because this latest election was fairly polarizing, I think people on both sides of the aisle want to be more engaged: they want to be participating and organizing, building community, participating in the democratic process, making sure that their voice is heard in the discussion. That typically hasn’t been a role the technology community has played. It’s a moment in which people have a lot of passion and excitement and enthusiasm for this type of engagement, so we wanted to make a space where people could gather, organize, and meet others who were feeling the same sense of responsibility, and find worthwhile projects to dedicate their time and energy to.

TW: Do you serve a political aim? Or is Data for Democracy non-political?

JM: We don’t serve a political aim. There’s people in the D4D community from both sides of the political spectrum. We have volunteers who consider themselves Tea-Party Republicans collaborating with people who worked with Hillary for America. The thing that holds everybody together is a belief in the power of technology and data to have a positive impact on the way that our cities and ultimately our states and country are run. That’s a pretty powerful thing.

TW: Are your volunteers primarily Americans working on American projects? Or is it more international than that?

JM: We’re fairly international, though everybody’s operating in English. Our volunteers skew toward the U.S., Canada, the U.K., and Australia, but there are also Europeans interested in working on projects that have more of an international focus. I mentioned the large group that’s working on understanding the flow of refugees inside of countries: it’s a fairly specific humanitarian objective, and the volunteers are partnering with an organization called VINVC which focuses on this kind of internal migration. It’s probably 80/20, with 80% of the community in the U.S., but even on U.S.-specific projects like the one with the city of Boston, the intention is to take the model and process and adapt them to the transportation and mobility data available around the country, and ultimately around the world.

TW: What skills do the volunteers bring to these projects?

JM: A wide variety, under the larger umbrella of data science and technology. There’s folks with data-engineering backgrounds, machine-learning, statistics, software engineering, infrastructure operations. There are people that make the plumbing of all of our software and data applications work, there are communications folks who focus on the story-telling, people who focus on data visualizations, even a few folks who are more product and project managers—They tend to be good organizers for the projects and for the general community.

With a community this large, we have to think deliberately about the mechanics of the community: how you join, how to hook you up with the right projects, how to make sure you don’t feel lost. With projects like this, it’s a little bit like wandering into a big city and trying to figure out where to stay for the night. It can be a little bit daunting if nobody is there to grab your hand. It’s somewhere in-between an open-source software project and an academic research project, like those two worlds coming together.

TW: You beat me to the open-source community analogy. Walk me through the project management model. How do the projects get determined? Who leads them?

JM: The projects come from two places. First, someone in the community will have an idea—something that would be interesting to work on. We have a space in the community for those sorts of conversations, and if a handful of people are also excited about that idea, then they run off and do it.

Second, somebody from outside the community might have an idea, hear that we exist, and then approach us about collaborating and executing on the idea. Our work with the Internal Displacement Monitoring Center is a good example: the IDMC obviously has deep expertise in understanding immigration law, but its members are not technologists and data scientists; nevertheless, they have important data needs that we can help with.

So far, every project has started with a small core of people—one, two, or three—who have expressed passion for delivering it, and have time and energy to devote to it. I tap them on the shoulder and say “Hey it looks like you’re excited about this, how about you assume the responsibility for leading it, organizing it, setting deliverables, making sure that people understand what this project is about and how to get involved with it.”

So far, that model is working. There aren’t a lot of good working models for collaborative research, like there are for collaborative software development, for example. The people who end up being project leads are so essential to this process: they document objectives, needed skills, the sorts of people who can add value, and the specific, bite-size tasks contributors can engage in to give something back to the project.

TW: How many projects are you working on? What does delivery look like?

JM: Right now, there’s about a dozen active projects with multiple delivery points. In a sense, there’s no such thing as done. In the election transparency project, the first deliverable was to document county-level elections results back to 1988 for all of the counties in the United States. That was a big marker. Once the volunteers produced that data set, they published it via a partner platform, Data.World. They made that data available to the public. That’s a big deliverable, but it’s just step one.

Next there’s the modeling process to understand what economic or socioeconomic factors might have caused certain counties to flip in any given election year, and what the underlying mechanics of that might be. That requires a lot of statistics. The team is close to having models that explain at least some of that phenomenon. The deliverable after that will be reports or, in our case, blog posts where we communicate findings and the implications of those findings. Along the way, we’re generating artifacts that can be used by other data scientists and software engineers. Everything we work on we publish as open-source projects.

TW: How is Data for Democracy funded?

JM: We don’t have corporate sponsors. A handful of technology providers have offered their products to the community to use for free. Data.World is a data publishing and collaboration platform—many of their staff are community members and have supported projects in addition to offering use of their platform. Eventador is a streaming data platform that’s been helpful in data acquisition and processing. Mode Analytics is an analytics and dashboard platform that we’ve been using for data exploration and visualization. And Domino Data Lab is a collaborative research platform which we’ve been utilizing as well.

TW: How can someone reading this get involved with Data for Democracy?

JM: For an individual, just let us know. We have a couple steps to get you into the community, understand where you might want to contribute, where your skills might sync up with active projects.

For an organization, it’s the same process, but we’ll talk about how Data for Democracy can be useful to the organization. The city of Boston had a very clear idea—we’re working with their data and analytics team, so they had a specific project idea that was appropriate for data science and technology. Then we can frame a project and offer it to the community to see who’s interested in working on it.

TW: Any other projects to highlight?

JM: We’re sponsoring a data visualization project with KDNuggets. The goal is to debunk a false statement using data visualizations as a story-telling tool. It’s a nice way to counter the rhetoric we heard over the course of the last election. People say we’re in this post-factual environment—as data scientists, we have a real responsibility to right that ship. It’s an interesting idea for a contest: trying to get people to think about how they can clearly communicate a fact so that it’s interpretable, it makes sense, and it’s sticky.

TW: Data for Democracy just hit a thousand volunteers. How important is that milestone?

JM: It signals that this is an important movement for the technology community. This isn’t just a response to the election, this is something that the community needs. This sense of civic engagement and responsibility is a real thing. This is a foundational shift in the way technologists see themselves.

TW: Where do you go from here? What comes next?

JM: There’s always more work to be done. It means making sure that we’re collaborating with partners that can use this kind of help in furthering their mission. When we have the data sets that we’re creating and the models that we’re producing, we’re making sure that we communicate that to the outside world in the broader community…that we’re participating in the national discussion about the kind of discourse that we want our country to have. It means continuing to improve our community so it’s easy for people to get involved, there’s always something for them to do, and that we’re making it a place that’s welcoming and positive and accepting and full of energy, which is what it is right now.

Blue Hill Finds Managed Mobility Services Deliver a Three-Year ROI of 184%

In most areas of business, inaction can be just as impactful as action. Enterprises typically view the cost of not acting as an opportunity cost, or, at most, an indirect bottom line impact. But for enterprise mobility, not acting has both direct and indirect cost implications from lost financial, technical, operational, and strategic value. In fact, the direct monetary cost of not acting is actually higher than the cost of Managed Mobility Services. How can that be? Let’s dig into the numbers…

By Blue Hill estimates, unmanaged direct mobility costs can be 20% overweight compared to a managed environment. For the average billion-dollar revenue company – with $5 – $10 million in telecom/mobility spend – expense management alone can be a million-dollar savings opportunity.

Unmanaged environments generate significant costs from fees (such as late fees or overage charges), as well as service order placement and support. Apart from monetary costs, not acting also presents opportunity costs from lost productivity and technical debt. Potential revenue-generating activities are re-allocated to overhead or administrative tasks, and device downtime is frequent and lengthy. Finally, the enterprise does not have a coordinated, long-term mobility strategy in place, and thus faces strategic costs.

The cost of not acting can be substantial. But what if the enterprise does act, using in-house resources to match the capabilities provided by a third-party MMS vendor? Typically, it will spend more and receive a lower level of service than an MMS vendor can provide. Based on Blue Hill discussions with enterprises, between helpdesk, email, security, and invoice management, organizations devote two to three full-time equivalents for every 1,000 devices. Based on an average annual salary of $63,000 for an entry-level telecom engineer, and a 1.3 multiplier for the fully loaded cost of an employee, this results in a labor cost of approximately $164,000 – $246,000 per year.
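
The arithmetic behind that range, reproduced as a quick check:

    BASE_SALARY = 63_000        # entry-level telecom engineer
    LOADED_MULTIPLIER = 1.3     # fully loaded cost of an employee

    for ftes in (2, 3):         # two to three FTEs per 1,000 devices
        annual_cost = ftes * BASE_SALARY * LOADED_MULTIPLIER
        print(f"{ftes} FTEs: ${annual_cost:,.0f} per year")
    # 2 FTEs: $163,800 per year  (~$164,000)
    # 3 FTEs: $245,700 per year  (~$246,000)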

Direct mobile costs, fees, and service order costs can be reduced somewhat, compared to an unmanaged environment, but at the expense of using IT resources for low-value, high-effort tasks such as resolving carrier disputes or sorting through bills – tasks that are not a core use of IT resources. Finally, in-house support costs are significantly higher than the support costs for Managed Services, as support is often bundled into the MMS contract. To achieve the same level of service in-house would require significant internal support resources. While enterprises may be able to re-create the capabilities of an MMS vendor in-house, they will typically do so at a much higher monetary, opportunity, and employee cost compared to a dedicated MMS vendor.

Managed Mobility Services comes out ahead. Overall, Blue Hill estimates that the direct device and data costs of a standard enterprise can typically be driven down to less than $60 per device per month through a coordinated, well-managed, and effectively-sourced approach. But cost savings are not the most impressive part – more impressive is the return on investment generated over a three-year period by Managed Mobility Services.

Based on conservative Blue Hill estimates, the cost reduction from IT resources and carrier expenses alone can result in a 3-year ROI of 184% based on an assumption of 20% carrier savings in the first year that reduces as the environment is optimized, as well as 50% IT overhead savings. Blue Hill has seen carrier savings in excess of 40% and an elimination of direct in-house IT mobility support, which would increase this ROI substantially. Blue Hill notes that, even with conservative estimates, Managed Mobility Services can provide higher levels of service compared to in-house management, and a three-year ROI ranging from 150% – 450% by reducing monetary, opportunity, and employee costs for the enterprise.

There is a clear cost of not acting for Managed Mobility Services. Blue Hill describes this problem in greater detail in our recent report, aptly titled, The Cost of Not Acting for Managed Mobility Services. Blue Hill did the math: Managed Mobility Services provide a higher level of service than in-house or unmanaged environments, while delivering a significant three-year return on investment.

4 Key Paths to Enterprise Mobility: How Your Enterprise Could Save a Million Dollars

Ask a line-of-business or IT manager to list, one by one, each step that her enterprise took to end up at its current mobility strategy, and she likely won’t be able to tell you. Business leaders aim to take steps that lead logically toward a well-defined, long-term vision, but for enterprise mobility, adoption rarely follows this path. Often, an enterprise will invest in mobility or adopt managed services in stages, in an inefficient and potentially costly way that delays the enterprise’s ability to unlock the strategic value of mobility and to achieve digital transformation. There are many roads to Managed Mobility Services, but taking the one less traveled – adopting a full-suite managed services contract initially – might make all the difference.

Businesses were slow to predict the impact that mobile would have on their workforce and on their business operations overall, and thus few enterprises put a long-term, cross-departmental mobility plan in place before beginning to invest in mobility. This left many with mobile environments that support multiple carriers, device types, applications, and departmental policies without a coordinated, organization-wide approach that spans purchasing, logistics, implementation, kitting, replacement, bill pay, and so forth.

Though this piecemeal approach is sub-optimal, once in place – short of a significant business case being made or a major catalyst event forcing the enterprise to act – it is likely to remain in place, both out of simplicity and to avoid the need to address and prioritize the various stakeholder interests involved in enterprise mobility.

Generally, expense management presents the clearest business case for an enterprise to pursue managed services, due to the visibility of expenses in the enterprise. Thus, an expense management contract will often be adopted first. By Blue Hill estimates, unmanaged direct mobility costs can be 20% overweight compared to a managed environment. For the average billion-dollar revenue company, telecom and mobility spend averages $5-10 million per year, making this a million-dollar savings opportunity.

After businesses have made the case for Telecom Expense Management (TEM) solutions, they will often pursue additional managed services to achieve greater cost savings and efficiency gains. Blue Hill documented the costs and benefits of various mobility strategies in our recent report, The Cost of Not Acting for Managed Mobility Services.

The timeframe for adopting components of MMS varies, but most enterprises generally seek to support financial, technical, logistical, and/or strategic needs through managed services. Successful Managed Mobility Services support some or all of the following components of enterprise mobility: 

Financial: contracts, invoice management, payments, data consumption/roaming, dispute management

Technical: kitting, staging, content, data, identity, security, apps

Logistical: sourcing, device fulfillment, device repair/replace, device replenishment

Strategic: mobile business assessment, health and security check

Enterprise mobility needs become more complex over time, and rise up the enterprise hierarchy from the basic ability to use mobility, to security and governance, to the widespread adoption of mobility, and then finally to more strategic or transformative uses of mobility. Enterprises can achieve these high-level hierarchical needs through multiple managed services contracts that the enterprise has invested in over time. However, the greatest strategic and transformative value for managed services is achieved when an enterprise pursues a full-suite Managed Mobility Services contract initially, giving the vendor visibility into all areas of the enterprise’s mobility environment: expenses, operations, logistics, and even applications and security settings at the device level.

Utilizing a single vendor for managed services creates synergies in financial, operational, technical, and strategic value by placing all responsibility with a vendor that acts as a single point of contact for all enterprise mobility needs. By managing all aspects of an enterprise’s mobility strategy, the MMS vendor can seek cost savings and efficiency gains throughout the entire mobility lifecycle, with a greater understanding of how to optimize the environment from a financial, operational, technical, and strategic standpoint.

For enterprises with an existing TEM contract: it’s not too late! Blue Hill recommends that these enterprises pursue opportunities for additional managed services with their existing vendor relationship, or look to outside vendors if their TEM vendor does not also support Managed Mobility Services. For enterprises with an unmanaged mobility environment, Blue Hill recommends considering a single vendor for all Managed Mobility Services to achieve the greatest potential strategic value from the relationship.

What Excites Me About Sales Enablement

Anyone who has met me observes two things: I have pink hair and I am passionate about aligning B2B sales and marketing.  Enterprise buyers today shop for solutions the same way they buy TVs and jeans.  And that’s forcing a change in the way enterprise sales and marketing communicate with the buyer, and the tools we use to support that.

In the past couple of years, there has been an explosion of new solutions in what is being called Sales Enablement. But what is “enablement” exactly? It’s not in the Merriam-Webster dictionary. So we have a challenge: a new industry that’s bursting at the seams, rife with confusion and noise, defined by a term that’s not a real word. That’s exactly why I joined Blue Hill Research. I want to help create some meaning and structure out of this chaos.

I’ve been watching the advancing changes in sales and marketing since 2009, as LinkedIn was just getting traction with 50 million users (today they have over 400 million) and we all started to get a lot more connected with our buyers via mobile and social.  At the time, I was managing an inside sales team and we were noticing that our buyers were behaving differently, that they were spending time in forums and social media discussing best practices and getting input from one another on what solutions to consider.  We began experimenting with new approaches including social media outreach and listening, account-focused research and targeting, and leveraging specific content across the sales cycle. With these new approaches and experimentation, we cut our sales cycle by more than half, beating the industry average overall by a considerable amount.  We learned that we could differentiate by selling differently.

Yet we struggled with the tools we had to work with, and providing metrics and reporting to show management what we were doing was nearly impossible. That experience got me hooked. I wanted to figure out what was going on with buyers and how I could learn more and improve on that experience in the sales and marketing functions. Recently, I’ve noticed a growing number of tools, in addition to the original CRM systems, designed to help salespeople more easily, quickly, and effectively engage with the buyer in their journey. A lot of them tap into the mobile and social aspects of engagement today. Automation, AI, and predictive analytics are being tried out in the sales world, a place they had never been before. Sales is demanding more educational content tailored to each step in the decision process. Social Selling became a “thing.” In 2014, I joined the association for inside sales professionals (AA-ISP) and became chair of the San Diego chapter. The AA-ISP is a global organization dedicated to the profession of inside sales and sales development. At its last annual leadership meeting, the number of new vendors in the expo was astonishing. All seem to have similar messaging. How can one tell them apart? How does an enterprise figure out which ones fit together best for their company? Which are viable? Where does one start to build requirements for these new tools? How can you ensure adoption once implemented? Which ones are working, which ones aren’t? What’s the ROI on any of this?

This is why Sales Enablement excites me. It’s a brand-new space. It’s an industry that’s forming, learning, maturing. There is a lot of stuff to figure out. There will be consolidation. There will be successes and failures. It’s stimulating to be witnessing the birth of a new category, one that is serving a new and evolving need. Joining the Blue Hill Research team, I bring my experience from the other side of the table, when I was part of the emerging Telecom Expense Management industry in the early 2000s. As a TEM vendor, I relied on groups like Blue Hill Research to help us figure out what the market wanted, honing our platform, our service delivery, and our implementation process as we matured. That experience will help me to ask the right questions and provide guidance to this maturing category of Sales Enablement.

Sales Enablement Is More Than Just Technology

Sales Enablement can’t just be about technology and tools; it has to start with a defined, structured process that addresses buyer and customer engagement. This is a process that touches multiple stakeholders in a company: marketing, sales, and customer success. Tools and technology support this process.  Sales Enablement is still in its early stages with new vendors emerging each month and experiencing growing pains getting their products to market and in deployment and adoption. My goal is to help vendors better understand their customers and to help buyers understand how to cut through the noise and hype so that they can successfully select the right solutions for their needs and utilize sales enablement in their environment.

For Sales Enablement Vendors

I am here to help sales enablement vendors to:

  • Craft messaging that helps them stand out
  • Understand the evolving needs of their buyers
  • Understand who the stakeholders are for each purchase
  • Identify which key process areas they support
  • Develop business case studies that showcase results, implementation strategies, deployment challenges, and ROI
  • Define the ROI of their solution
  • Develop best practices in implementation
  • Benchmark how they compare to the market

For Customers Adopting Sales Enablement

For enterprises looking to improve their sales processes, I am here to help them:

  • Stay current on market insights and updates
  • Align internal stakeholders and assess their needs
  • Define their own internal processes
  • Develop vendor requirements
  • Assess the best vendor fit for their needs
  • Prepare their internal teams to ensure implementation success
  • Apply best practices to ensure uniform user adoption

Let’s Clarify the Landscape

Most vendor landscapes I’ve seen lump together pretty much any company that touches sales, marketing, or customer success. I see everything from point solutions to complete platforms lumped together. Some include CRM solutions. My goal will be to break this down to provide more clarity for both vendors and users. I’ll be monitoring the trends – new entrants, mergers and acquisitions, funding, and innovations that will benefit users.

Bottom line, my goal is to prevent “shiny rock syndrome,” along with the heartache and wasted time and money that go with it.
