Qonnections 2019 is Qlik's annual user conference. Key news from this year's conference centered on the acquisitions of Podium Data and Attunity, an expansion of certifications on Google Cloud Platform, AWS and Azure, and support for Red Hat OpenShift. Many of these announcements reflected a key theme: a cloud- and SaaS-first approach.
Organizations now must store, process and use data of significantly greater volume and variety than in the past. These factors plus the velocity of data today — the unrelentingly rapid rate at which it is generated, both in enterprise systems and on the internet — add to the challenge of getting the data into a form that can be used for business tasks.
This year, Teradata rebranded its annual user conference from "Partners" to "Analytics Universe," and there is a reason for it. For decades, Teradata has represented the high end of the analytic database market, but new innovations and technologies are adding flexibility to Teradata's licensing as it competes. For a full breakdown of Teradata's Analytics Universe 2018 and my analysis of the largest announcements, watch my hot take video.
We at Ventana Research recently published our research agendas for 2018. The world of data and information management continues to evolve, as does our research on the use of these technologies to improve your organization’s operations. Relational databases are no longer the only viable enterprise data store as more organizations adopt a polyglot database infrastructure. And while their exact form may still be changing, as I have recently written, big data technologies are here to stay. Our Data and Analytics in the Cloud Benchmark Research indicates that an increasing number of organizations are opting for cloud-based deployments: A modern data infrastructure includes a hybrid of on-premises and cloud deployments for 44 percent of organizations. Our upcoming research will track how these changes are affecting data- and information-management processes.
Big data has become an integral part of information management. Nearly all organizations have some need to access big data sources and produce actionable information for decision-makers. Recognizing this connection, we merged these two topics when we put together our recently published research agendas for 2017. As we plan our research, we focus on current technologies and how they can be used to improve an organization’s performance. We then share those results with our readers.
The business intelligence market is bounded on one side by big data and on the other by data preparation. To maximize the value of the information they use, organizations must collect and analyze ever-increasing volumes of data while the available tools continue to evolve within the big data ecosystem I have written about. In our benchmark research on big data analytics, about half (51%) of organizations said they want to access big data using their existing BI tools. At the same time, as I have noted, end users are demanding self-service data preparation capabilities to facilitate their analyses.
Teradata recently held its annual Partners conference, which gathered several thousand customers and partners from around the world. This was the first Partners event since Vic Lund was appointed president and CEO in May. Teradata's revenues are down about 5 percent year over year, which likely prompted some changes at the company. Over the past few years Teradata made several technology acquisitions and perhaps spread its resources too thin. At the event, Lund committed the company to a focus on customers, which was a significant part of Teradata's success in the past. This commitment was well received by the customers I spoke with at the event.
It’s part of my job to cover the ecosystem of Hadoop, the open source big data technology, but sometimes it makes my head spin. If this is not your primary job, how can you possibly keep up? I hope that a discussion of what I’ve found to be most important will help those who don’t have the time and energy to devote to this wide-ranging topic.
Data virtualization is not new, but it has changed over the years. The term describes a process of combining data on the fly from multiple sources rather than copying that data into a common repository such as a data warehouse or a data lake, which I have written about. There are many reasons for an organization concerned with managing its data to consider data virtualization, most stemming from the fact that the data does not have to be copied to a new location. It could, for instance, eliminate the cost of building and maintaining a copy of one of the organization’s big data sources. Recognizing these benefits, many database and data integration companies offer data virtualization products. Denodo, one of the few independent, best-of-breed vendors in this market today, brings these capabilities to big data sources and data lakes.
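To make the idea concrete, here is a minimal sketch of the federated-query pattern behind data virtualization. This is an illustration of the general technique, not Denodo's product or API, and all source names and sample data are hypothetical: two independent sources, an in-memory SQLite table standing in for an operational database and a CSV standing in for a data lake extract, are combined at query time with no copy persisted to a warehouse.

```python
import csv
import io
import sqlite3

# Source 1: an operational database (SQLite stands in for it here).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Acme"), (2, "Globex")])

# Source 2: a flat-file extract, as might sit in a data lake (hypothetical data).
orders_csv = io.StringIO("customer_id,amount\n1,250\n2,75\n1,400\n")

# "Virtual" layer: read both sources at query time and join in memory;
# nothing is copied into a warehouse or other persistent store.
customers = {row[0]: row[1] for row in db.execute("SELECT id, name FROM customers")}

totals: dict[str, float] = {}
for rec in csv.DictReader(orders_csv):
    name = customers.get(int(rec["customer_id"]), "unknown")
    totals[name] = totals.get(name, 0.0) + float(rec["amount"])

print(totals)  # {'Acme': 650.0, 'Globex': 75.0}
```

A commercial data virtualization platform layers query optimization, pushdown to the sources, caching and security on top of this basic pattern, but the underlying trade-off is the same: data stays fresh and unduplicated at the cost of doing the work at query time.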
Qlik helped pioneer the visual discovery market with its QlikView product. In some respects, Qlik and its competitors also spawned the self-service trend rippling through the analytics market today. Their aim was to enable business users to perform analytics for themselves rather than building a product with the perfect set of features for IT. After establishing success with end users, the company began to address more of the concerns of IT, eventually creating a robust enterprise-grade analytics platform. This approach has worked for Qlik, driving growth that led to an initial public offering in 2010. The company now generates more than half a billion dollars in revenue annually, making it one of the largest independent analytics vendors. Based on its company and products, Qlik was rated a Hot Vendor in our 2015 Value Index on Analytics and Business Intelligence and was one of the highest ranked in usability.