We’ve recently published our latest Benchmark Research on Data Governance and it’s fair to say, “you’ve come a long way, baby.” Many of you reading this weren’t around when that phrase was introduced in 1968 to promote Virginia Slims cigarettes, but you may have heard the phrase because it went on to become a part of popular culture. We’ve learned a lot about cigarettes since then, and we’ve learned a lot about data governance, too.
Organizations face various challenges with analytics and business intelligence processes, including curating and modeling data across disparate sources and data warehouses, maintaining data quality and ensuring security and governance. Traditional processes are slow to transform large and diverse datasets into something that is easily consumable in BI. It can take days or weeks to create reports and dashboards, and longer still if processes change and new data sources are introduced. Our Analytics and Data Benchmark Research shows that the most time-consuming processes are preparing data, reviewing it for quality issues and preparing reports for presentation and distribution.
Natural language processing (NLP) is a field combining artificial intelligence (AI), data science and linguistics that enables computers to understand, interpret and manipulate text or spoken words. NLP includes generating narratives based on a set of data values, using text or speech as inputs to access information, and analyzing text or speech, for instance, to determine its sentiment. There are various techniques for interpreting human language, ranging from statistical and machine learning (ML) methods to rules-based and algorithmic approaches. In this perspective, we will focus on two aspects of NLP: natural language query (NLQ), which offers the ability to use natural language expressions to discover and understand data, and natural language generation (NLG), which uses AI to produce written or spoken narratives from a dataset. NLQ and NLG enable business personnel to communicate information needs with business intelligence (BI) systems more easily.
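To make the sentiment-analysis use case above concrete, here is a minimal sketch of the simplest rules-based approach mentioned: scoring a text against word lists. The word lists and scoring are purely illustrative assumptions, not any vendor's method; production NLP systems rely on statistical or ML models.

```python
# Illustrative rules-based sentiment scorer: counts positive and
# negative words in a text. Word lists are hypothetical examples.

POSITIVE = {"good", "great", "excellent", "helpful", "fast"}
NEGATIVE = {"bad", "poor", "slow", "broken", "confusing"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A statistical or ML approach would replace the fixed word lists with weights learned from labeled examples, which is what makes modern NLQ and NLG systems far more robust than rules alone.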
Having just completed the 2021 Ventana Research Value Index for Analytics and Data, I want to share some of my observations about how the market has advanced since our assessment two years ago. The analytics software market is quite mature and products from any of the vendors we assess can be used to effectively deliver information to help your organization improve its operations. However, it’s also interesting to see how much the market continues to advance and how much investment vendors continue to make.
I am happy to share insights gleaned from our latest Value Index research, an assessment of how well vendors’ offerings meet buyers’ requirements. The Ventana Research Value Index: Analytics and Data 2021 is the distillation of a year of market and product research by Ventana Research. Drawing on our Benchmark Research, we apply a structured methodology built on evaluation categories that reflect the real-world criteria incorporated in a request for proposal to analytics and data vendors supporting the spectrum of business intelligence. Using this methodology, we evaluated vendor submissions in seven categories: five relevant to the product experience (adaptability, capability, manageability, reliability and usability) and two related to the customer experience (TCO/ROI and vendor validation).
In this analyst perspective, Dave Menninger takes a look at data lakes. He explains the term “data lake,” describes common use cases and shares his views on some of the latest market trends. He explores the relationship between data warehouses and data lakes and shares some of Ventana Research’s findings on the subject. He also provides an assessment of the risks organizations face in working with data lakes and offers recommendations for maximizing the potential of data.
I was recently asked to identify key modern data architecture trends. Data architectures have changed significantly to accommodate larger volumes of data as well as new types of data such as streaming and unstructured data. Here are some of the trends I see continuing to impact data architectures.
MicroStrategy recently held its annual user conference, which focused on the theme of the “Intelligent Enterprise.” HyperIntelligence, the innovative product for delivering analytics throughout organizations that the company introduced a year ago, was the star of the event. The company announced enhancements to HyperIntelligence and the latest version of its flagship platform, MicroStrategy 2020, as well as a new two-tiered education and certification program.
The Oracle Analytics Summit 2019 was the inaugural user event for Oracle Analytics customers, and the company also broadcast video of the event to thousands of others. You can watch the keynote at https://www.youtube.com/watch?v=eY0IPNqzsy4. Executives discussed some big organizational changes, including Bruno Aziza joining last year to lead the analytics organization. This event marked a transition and "a new beginning" for the Oracle Analytics portfolio, as the company announced three new analytics products.
Alteryx Inspire 2019, this year's user conference for Alteryx, drew around 4,500 customers, partners and prospects to Nashville’s Gaylord Opryland Resort & Convention Center in Tennessee last month. The strong attendance reflected the rapid growth Alteryx has experienced over the last year: roughly 50% year over year. This year's conference focused on Alteryx's evolution from data preparation to AI and machine learning, both of which were front and center.