The big data market continues to expand, enabling new types of analyses, new business models and new revenue streams for organizations that implement these capabilities. Following our previous research into big data and information optimization, we’ll investigate the technology trends affecting both of these domains as part of our 2016 research agenda.

A key tool for deriving value from big data is in-memory computing. As data is generated, organizations can use the speed of in-memory computing to accelerate analytics on that data. Nearly two-thirds (65%) of participants in our big data analytics benchmark research identified real-time analytics as an important aspect of in-memory computing. Real-time analytics enables organizations to respond to events quickly, for instance, minimizing or avoiding the cost of downtime in manufacturing processes or rerouting in-transit deliveries to cover delays in other shipments to preferred customers. Several big data vendors offer in-memory computing in their platforms.

Predictive analytics and machine learning also contribute to information optimization. These analytic techniques can automate some decision-making to improve and accelerate business processes that deal with large amounts of data. Our new big data benchmark research will investigate the use of predictive analytics with big data, among other topics. In combination with our upcoming data preparation benchmark research, we’ll explore the unification of big data technologies and the impact on resources and tools needed to successfully use big data. In our previous research, three-quarters of participants said they are using business intelligence tools to work with big data analytics. We will look for similar unification of other technologies with big data.

The emergence of the Internet of Things (IoT) – an extension of digital connectivity to devices and sensors in homes, businesses, vehicles and potentially almost anywhere – creates additional volumes of data and increases the pressure to handle data in motion for both analytics and operations. That is, the data from these devices is generated in such volumes and with such frequency that specialized technologies have emerged to tackle these challenges. We’ll explore in depth the myriad issues arising from this explosion of connectivity in our benchmark research on the Internet of Things and Operational Intelligence this year.

Another key trend we will explore is the use of data preparation and information management tools to simplify access to data. Data preparation is a key step in this process, yet our data and analytics in the cloud benchmark research reveals that it requires too much time: More than half (55%) of participants said they spend the most time in their analytic process preparing data for analysis. Virtualizing data access can accelerate access to data and enable data exploration with less investment than is required to consolidate data into a single repository. We will track adoption of cloud-based and virtualized integration capabilities, along with the increasing use of Hadoop as both a data source and a store for processing big data. In addition, our research will examine the role of search, natural language and text processing.
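As a purely illustrative sketch of why preparation consumes so much analyst time (the field names and records here are invented for the example, not drawn from any research cited above), even a simple analysis often requires first reconciling records from multiple sources – standardizing formats and removing duplicates – before any analytics can run:

```python
# Illustrative only: a minimal data-preparation step that standardizes
# and deduplicates hypothetical customer records from two sources.

def prepare(records):
    """Normalize field formats and drop duplicate records by email."""
    seen = set()
    clean = []
    for rec in records:
        email = rec["email"].strip().lower()
        if email in seen:
            continue  # same customer appeared in both sources
        seen.add(email)
        clean.append({
            "email": email,
            "name": rec["name"].strip().title(),
        })
    return clean

crm_export = [{"email": "ANN@EXAMPLE.COM ", "name": "ann smith"}]
web_signups = [{"email": "ann@example.com", "name": "Ann Smith"},
               {"email": "bob@example.com", "name": "bob jones"}]

print(prepare(crm_export + web_signups))
# → [{'email': 'ann@example.com', 'name': 'Ann Smith'},
#    {'email': 'bob@example.com', 'name': 'Bob Jones'}]
```

Multiply these few normalization rules across dozens of sources and formats, and it becomes clear how preparation comes to dominate the analytic process.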

We suggest organizations develop their big data competencies for continuous analytics – collecting and analyzing data as it is generated. This effort should start with establishing appropriate data preparation processes for information responsiveness. Data models and analyses should support machine learning and cognitive computing to automate portions of the analytic process. Much of this data will have to be processed in real time, as it is being generated. All of these advances will require more sophisticated methods of big data governance and master data management. We look forward to reporting on developments in these areas throughout 2016 in our Big Data and Information Optimization Research Agenda.
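As a hedged illustration of the continuous-analytics pattern described above (the window size, threshold and sensor readings are invented for the example), one simple form is a rolling window that evaluates each event as it arrives, rather than after the data lands in a warehouse:

```python
from collections import deque

class RollingMonitor:
    """Analyze events as they are generated: keep a fixed-size window
    of recent values and flag readings that deviate sharply from the
    window's average."""

    def __init__(self, window=5, threshold=1.5):
        self.window = deque(maxlen=window)  # oldest values drop off automatically
        self.threshold = threshold

    def observe(self, value):
        """Return True if value exceeds threshold times the rolling mean."""
        alert = bool(self.window) and value > self.threshold * (
            sum(self.window) / len(self.window))
        self.window.append(value)
        return alert

monitor = RollingMonitor()
readings = [10, 11, 9, 10, 25, 10]   # 25 is an anomalous spike
alerts = [monitor.observe(r) for r in readings]
print(alerts)  # → [False, False, False, False, True, False]
```

The same evaluate-as-it-arrives structure, scaled up across distributed stream-processing systems, is what allows the manufacturing-downtime and delivery-rerouting responses described earlier to happen while the event is still actionable.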

Regards,

David Menninger

SVP & Research Director

Some followers of Ventana Research may recall my work here several years ago. Here and elsewhere I have spent most of my career in the data and analytics markets matching user requirements with technologies to meet those needs. I’m happy to be returning to Ventana Research to resume investigating ways in which organizations can make the most of their data to improve their business processes; for a first look, please see our 2016 research agenda on Big Data and Information Optimization. I relish the opportunity to conduct primary market research in the form of Ventana’s well-known benchmark research and to help end users and vendors apply the information collected in those studies.

Much has happened since I was previously part of the Ventana Research team. One major change is the explosive growth in the use and acceptance of big data. For example, when I conducted the first benchmark research on big data, only 22 percent of participants were using Hadoop in production. Our more recent research shows that the share of respondents using Hadoop in production has grown by more than 50 percent, to 37 percent.

In the area of predictive analytics, another research study from my prior tenure identified a skills shortage. This shortage was identified in several ways. More than four out of five (83%) participants indicated that users did not have enough skills training, and more than half (58%) said they didn’t understand the mathematics needed to produce their own analyses. In the interim, numerous university programs have begun to help address this shortage. A Google search shows that New York University, Columbia, Indiana University, Wesleyan, the University of Washington, the University of Michigan, the University of Rochester and the University of Texas at San Antonio have created data science programs – and this is just page one of the search results. I anticipate that these programs will become increasingly popular as the rise of big data continues to drive demand for these skills. For the time being, our more recent study on predictive analytics suggests that these skill shortages still exist, with very similar responses of 79 percent and 66 percent, respectively. I’ll continue to watch these and other analytics issues noted in our 2016 Business Analytics Research Agenda.

We have also seen a sea change in the acceptance of open source software in enterprises. I think it is fair to say that open source helped drive the growth of big data, with projects such as Cassandra, Hadoop, MongoDB and Spark enabling organizations to experiment with large volumes of data before making significant license purchases to put those systems into production. The open source momentum is further evidenced by some large vendors taking formerly proprietary, “closed source” technologies and making them open source. Perhaps the biggest example is Microsoft making its .NET technology open source. My former employer Pivotal also converted its data management products, in which it had invested more than 10 years of proprietary development, to open source versions.

Another notable change is the growth of interest in the Internet of Things (IoT). Many years ago I considered a position with a vendor that helped organizations manage RFID data. Adoption was slow at the time, in part because of the cost of RFID tags but also because of the cost and challenges of collecting and analyzing very large volumes of data. As big data technologies have grown, so too has interest in IoT. Technologies exist today to make processing such large amounts of data possible in the time frames and at costs that make it practical to consider how instrumentation of devices can be used to enhance business performance. We’ll be undertaking specific research on this topic in 2016: See our Big Data and Information Optimization Research Agenda.

If he were alive today, Charles Darwin might have noted the emergence of a new species: the Unicorn, which Wikipedia defines as a startup company, often software-based, whose valuation exceeds US$1 billion. You might wonder how this financial trend impacts our research and the advice we provide. The answer is that such valuations have the potential to alter the behavior of the markets we cover. They give these startup vendors access to funding great enough to change the competitive landscape. Such investments can put pressure on existing vendors to step up their game. In some cases they can also drive consolidation in the market or even cause certain vendors to exit markets, as Intel did when it left the Hadoop distribution market. At Ventana Research we are ready to help end-user organizations evaluate whether the unicorns are ready for prime time and how these newcomers might affect the established software vendors those organizations already use. One way we help in this process is with our Ventana Research Value Indexes, which provide fact-based assessments of software products within a variety of market segments.

So I hope you’ll pardon the interruption in our conversation. It’s good to be back, and I am looking forward to working with the entire Ventana Research team to provide research and insights that will help guide your use of technology to improve your business decisions and processes.

Regards,

David Menninger

SVP & Research Director
