In 2017, Strata + Hadoop World was renamed the Strata Data Conference. As I pointed out in my coverage of last year’s event, the focus was largely on machine learning and artificial intelligence (AI). That theme continued this year, but my impression of the event was of a community looking to get value out of data regardless of the technology being used to manage that data. The change was subtle: The location was the same, the exhibitors were largely the same, and attendance was similar this year and last. But no particular vendor or technology dominated the event.
All too often, software vendors view analytics as the end rather than the beginning of a process. I’m reminded of some of the advanced math classes I’ve taken, in which the teaching focused on a few key aspects of a mathematical proof or solution, leaving the rest of the exercise to be worked out by the students. Similarly, in other contexts you may hear people say that the numbers speak for themselves.
We at Ventana Research recently published our research agendas for 2018. Analytics and business intelligence are evolving, and so is our research on their use across practice areas. Earlier research has shown that analytics can deliver significant value to organizations; for example, our predictive analytics research shows that 57 percent of organizations reported achieving a competitive advantage and half created new revenue opportunities with predictive analytics. Waves of investment in self-service analytics have propelled the market for analytics tools, significantly empowering line-of-business organizations to create their own analytics and set their own analytic priorities. But organizations are also beginning to recognize some of the limitations of current analytics implementations, self-service among them. Our Data Preparation Benchmark Research reveals that fewer than half (42 percent) of organizations are comfortable allowing business users to work with data not prepared by IT. Our research this year will continue to explore both the successes and the challenges organizations face as they continue to use analytics and BI.
I recently attended SAP TechEd in Las Vegas to hear the latest from the company regarding its analytics and business intelligence offerings as well as its data management platform. The company used the event to launch SAP Data Hub and made several other data and analytics announcements that I’ll cover below.
The Strata Data Conference is changing, and it’s changing in a good way. At the recent Strata Data Conference in New York, Mike Olson, chief strategy officer at Cloudera, which co-sponsored the event, commented that at prior events we used to talk about the “Hadoop zoo animals,” meaning the various components of the Hadoop ecosystem, about which I have written previously. Following last fall’s Strata event, I observed that the conference was evolving to focus on the use of data. Advancing that evolution, this year’s event focused on a particular type of usage: artificial intelligence (AI) and machine learning. The evolution from a focus on zoo animals to a focus on business value using advanced analytics shows further maturation of the big data market.
Recently Hortonworks announced some significant additions to its products at the DataWorks Summit. These additions reflect the fact that the big data market continues to evolve, as I have previously written.
Natural language generation (NLG), the process of generating text or narratives from a set of data values, can make analytics accessible to a broader audience. NLG narratives can be used for a variety of purposes, but in this perspective I focus on how NLG can enhance business intelligence (BI) processes. In the case of BI, NLG can be used to explain what has happened, why it is happening and even what actions to take. These narratives can be understood by a broader range of business users than the tables and charts of data that are the typical output of most BI applications and analytics tools.
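To make the idea concrete, here is a minimal, hypothetical sketch of template-based NLG over BI data: a metric’s current and prior values drive both the wording and the figures in the generated sentence. The function name and metric are illustrative assumptions, not any vendor’s API, and commercial NLG products are far more sophisticated than a template like this.

```python
def narrate(metric, current, prior):
    """Generate a one-sentence narrative for a metric's period-over-period change.

    A minimal template-based NLG sketch: the data values determine both the
    direction word ("rose"/"fell") and the numbers embedded in the sentence.
    """
    change = current - prior
    if change == 0:
        return f"{metric} was flat at {current:,.0f}."
    direction = "rose" if change > 0 else "fell"
    pct = abs(change) / prior * 100
    return (f"{metric} {direction} {pct:.1f}% to {current:,.0f} "
            f"from {prior:,.0f} in the prior period.")

# Illustrative values, not real results.
print(narrate("Quarterly revenue", 1_250_000, 1_100_000))
# Quarterly revenue rose 13.6% to 1,250,000 from 1,100,000 in the prior period.
```

A real BI integration would also vary sentence structure and select which facts are worth narrating, which is where the analytical value of NLG lies.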
Big data initially was characterized in terms of “the three V’s”: volume, velocity and variety. Nearly five years ago I wrote about the three V’s as a way to explain why new and different technologies were needed to deal with big data. Since then the industry has tackled many of the technical challenges associated with the three V’s. In 2017 I propose that we focus instead on a different letter, with four A’s: analytics, awareness, anticipation and action. I’ll explain why each is important at this stage of big data evolution.
IBM recently held its inaugural World of Watson event. Formerly known as IBM Insight, and prior to that IBM Information on Demand, the annual event, attended by 17,000 people this year, showcases IBM’s data and analytics portfolio as well as the company’s broader efforts in cognitive computing. The theme of the event, as you might guess, was the Watson family of cognitive computing products. I, for one, was glad to spend more time getting to know the Watson product line, and I’d like to share some of my observations from the event.
I recently attended .conf2016, Splunk’s seventh annual user conference. Splunk created the market for analyzing machine data (shorthand for machine-generated data), which consists of log files and event data from various types of systems and devices. Our big data analytics benchmark research shows that these are two of the most common sources of big data that organizations analyze. This market has proven to be fertile ground for Splunk, growing steadily with revenues more than doubling over the previous two fiscal years. Machine data is also the backbone for the Internet of Things (IoT) and operational intelligence, which form the basis of forthcoming benchmark research from Ventana Research.
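To illustrate what analyzing machine data involves, here is a minimal, hypothetical sketch in Python: parsing syslog-style log lines into fields and tallying events per process. The line format and sample lines are assumptions for illustration only; this is not how Splunk is implemented, and real machine data comes in many formats that such a rigid pattern would not cover.

```python
import re
from collections import Counter

# Hypothetical syslog-style lines; real formats vary widely across devices.
LOG_LINES = [
    "2016-09-27T10:15:01 host1 sshd[412]: Failed password for root",
    "2016-09-27T10:15:03 host2 nginx[88]: GET /index.html 200",
    "2016-09-27T10:15:07 host1 sshd[412]: Failed password for admin",
]

# Assumed layout: timestamp, host, process[pid]: message
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<proc>\w+)\[(?P<pid>\d+)\]:\s+(?P<msg>.*)$"
)

def events_by_process(lines):
    """Parse each log line into named fields and count events per process."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:  # silently skip lines that don't fit the assumed format
            counts[m.group("proc")] += 1
    return counts

print(dict(events_by_process(LOG_LINES)))  # {'sshd': 2, 'nginx': 1}
```

Extracting structure from semi-structured event streams like this, at enormous scale and across heterogeneous formats, is the core of the machine-data analytics market described above.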