Organizations of all sizes are dealing with exponentially increasing data volumes and data sources, which creates challenges such as siloed information, increased technical complexity across various systems and slow reporting of important business metrics. Migrating to the cloud does not solve the problems associated with performing analytics and business intelligence on data stored in disparate systems. And processing large volumes of data requires clusters of servers with hundreds or thousands of nodes, which can be difficult to administer. Our Analytics and Data Benchmark Research shows that organizations have concerns about current analytics and BI technology. Findings include difficulty integrating data with other business processes, systems that are not flexible enough to scale operations and trouble accessing data from various data sources.
Organizations today have huge volumes of data across various cloud and on-premises systems, and those volumes keep growing by the second. To derive value from this data, organizations must query it regularly and share insights with relevant teams and departments. Automating this process using natural language processing (NLP) and artificial intelligence and machine learning (AI/ML) enables line-of-business personnel to query the data faster, generate reports themselves without depending on IT, and make quick decisions. Some organizations have started using NLP in self-service analytics to quickly identify patterns and simplify data visualization. Our Analytics and Data Benchmark Research finds that about 81% of organizations expect to use natural language search for analytics to make timely and informed decisions.
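To make the idea concrete, here is a minimal, illustrative sketch of how a natural language question might be translated into a SQL query. The table name, columns and keyword mappings are hypothetical; a production natural language layer would use a trained language model rather than simple keyword matching.

```python
import re

# Hypothetical mappings from question keywords to SQL fragments.
METRICS = {"revenue": "SUM(amount)", "orders": "COUNT(*)"}
DIMENSIONS = {"region": "region", "product": "product"}

def question_to_sql(question: str) -> str:
    """Translate recognized keywords in a question into an aggregate query."""
    words = re.findall(r"[a-z]+", question.lower())
    metric = next((METRICS[w] for w in words if w in METRICS), "COUNT(*)")
    dims = [DIMENSIONS[w] for w in words if w in DIMENSIONS]
    sql = f"SELECT {', '.join(dims + [metric])} FROM sales"  # 'sales' is a hypothetical table
    if dims:
        sql += f" GROUP BY {', '.join(dims)}"
    return sql

print(question_to_sql("What was revenue by region?"))
# SELECT region, SUM(amount) FROM sales GROUP BY region
```

However crude, the pattern is the point: the business user supplies a question in plain language, and the analytics layer, not the user, is responsible for producing the query.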
Organizations today are working with multiple applications and systems, including enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and other systems, where data can easily become fragmented and siloed. And as an organization adds data sources, systems and custom applications, it becomes challenging to manage data consistently and keep data definitions up to date. This increases the need for master data management (MDM) software that can provide a single source of truth to drive accurate analytics and business operations.
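As an illustration of the core problem MDM software solves, the sketch below consolidates duplicate customer records from hypothetical ERP and CRM extracts into a single golden record. Real MDM products apply far more sophisticated matching and survivorship rules; the field names and merge logic here are assumptions for demonstration only.

```python
# Records for the same customer arrive from hypothetical ERP and CRM
# extracts; matching on a normalized email key merges them into one
# "golden" record, the single source of truth MDM software maintains.

def normalize(email: str) -> str:
    return email.strip().lower()

erp_records = [{"email": "Ana@Example.com", "name": "Ana Diaz", "phone": ""}]
crm_records = [{"email": "ana@example.com ", "name": "", "phone": "555-0101"}]

golden = {}
for record in erp_records + crm_records:
    key = normalize(record["email"])
    merged = golden.setdefault(key, {"email": key})
    for field in ("name", "phone"):
        if record[field] and not merged.get(field):  # keep first non-empty value
            merged[field] = record[field]

print(golden)
# {'ana@example.com': {'email': 'ana@example.com', 'name': 'Ana Diaz', 'phone': '555-0101'}}
```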
The technology industry throws around a lot of similar terms with different meanings as well as entirely different terms with similar meanings. In this post, I don’t want to debate the meanings and origins of different terms; rather, I’d like to highlight a technology weapon that you should have in your data management arsenal. We currently refer to this technology as data virtualization. Other similar terms you may have heard include data fabric, data mesh and [data] federation. I’ll briefly discuss these terms and how I see them being used, but ultimately, I’d like to share with you some research that shows why data virtualization can be valuable, regardless of what you call it.
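Before getting to that research, a minimal sketch may help illustrate the core idea behind data virtualization: a single query layer fetches from each source at request time and joins the results, rather than copying everything into one store first. The sources and schema below are hypothetical stand-ins.

```python
import sqlite3

# Source 1: a relational store (orders), here an in-memory SQLite database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 120.0), (2, 80.0), (1, 40.0)])

# Source 2: a second system holding customer names, stubbed as a dictionary.
crm = {1: "Ana Diaz", 2: "Ben Ito"}

def virtual_query():
    """Fetch from each source at request time and join in the virtual layer."""
    rows = db.execute(
        "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id"
    )
    return [(crm[customer_id], total) for customer_id, total in rows]

print(virtual_query())  # [('Ana Diaz', 160.0), ('Ben Ito', 80.0)]
```

The consuming application sees one logical interface while the data stays where it lives, which is the essential promise whether you call it virtualization, federation or part of a data fabric.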
For decades, data integration was a rigid process. Data was processed in batches once a month, once a week or once a day. Organizations needed to make sure those processes were completed successfully—and reliably—so they had the data necessary to make informed business decisions. The result was battle-tested integrations that could withstand the test of time.
Alteryx Inspire 2019, this year's user conference for Alteryx, drew around 4,500 customers, partners and prospects to Nashville's Gaylord Opryland Resort & Convention Center in Tennessee last month. The strong attendance reflected the growth Alteryx has experienced over the last year: roughly 50% year over year. This year's conference focused on Alteryx's evolution from data preparation to AI and machine learning, both of which were front and center.
Summit 2019, Information Builders' annual user conference, drew about 1,000 attendees this year, including customers, partners and prospects all working with Information Builders' technologies. Under new leadership, Summit 2019 showcased the direction Information Builders will take over the next couple of years.
Qonnections 2019 is Qlik's annual user conference. Key news from this year's event centered on the acquisitions of Podium Data and Attunity, along with expanded certifications on Google Cloud Platform, AWS and Azure and the ability to support Red Hat OpenShift. Many of these announcements reflected a key theme: a cloud- and SaaS-first approach.
Domopalooza 2019 was Domo's first annual user conference since going public, but the energy, excitement and new feature announcements have not slowed. With attendance in the thousands and growing fast, this year's conference focused on five key areas: digitization, real-time connectivity, driving insight-based actions, applying AI and machine learning, and building applications. All of these announcements are aimed at broadening the workloads Domo supports.
Once again I attended Tableau's user conference, along with 17,000 other attendees, affectionately self-described as "data nerds". Pushing the envelope in data capabilities and access, Tableau introduced the "Ask Data" feature, which allows users to pose natural language queries and receive a response, along with new data preparation capabilities and other enhancements to help data analysts. Further, Tableau announced new developer enhancements, including a new developer program to better align tools built for Tableau with Tableau's interface. For the full breakdown of Tableau User Conference 2018 and my analysis of the largest announcements, watch my hot-take video.