Process-mining software isn’t exactly new, but it is not yet widely known in the software technology market. The discipline has been around for at least a decade, and it is generating more interest these days as both specialist vendors and major enterprise software vendors offer process-mining products and services. We assert that through 2022, one in four organizations will look to streamline their operations by exploring process mining.
The industry is making huge strides with artificial intelligence (AI) and machine learning (ML). There is more data available to analyze than ever. Analytics vendors have made it easier to build and deploy models, and AI/ML is being embedded into many types of applications. Organizations are realizing the value that AI/ML provides, and there are now millions of professionals with AI or ML in their title or job description. AI/ML is even being used to make many aspects of itself easier. Still, organizations that want to build and deploy their own AI/ML models need to be realistic about the capabilities available today. As a practical matter, organizations should anticipate that a robust AI/ML deployment in the current environment requires a set of specialized skills and operational processes, including data operations (DataOps) and machine learning operations (MLOps), as well as collaboration across these disciplines and processes.
Businesses are transforming their organizations, building a data culture and deploying sophisticated analytics more broadly than ever. However, the process of using data and analytics is not always easy. The necessary tools are often separate, but our research shows organizations prefer an integrated environment. In our Data Preparation Benchmark Research, we found that 41% of participants use analytics and business intelligence tools for data preparation.
A data lake is a centralized repository designed to house big data in structured, semi-structured and unstructured form. I have been covering the data lake topic for several years and encourage you to check out an earlier perspective called Data Lakes: Safe Way to Swim in Big Data? for background. Our data lake research has uncovered some points to consider in your efforts, and I’d like to offer a deeper dive into our findings.
Every organization performing analytics with multiple employees needs to collaborate, both in the analytics process itself and in communicating the results of those analyses. As I continue my evaluation of analytics and data vendors, I have to admit some disappointment at the level of collaborative capabilities some analytics vendors provide. To be fair, the level of capabilities varies widely, but I expected collaborative capabilities to be more uniformly available as a standard feature in analytics technologies by now. I had anticipated that three-quarters of analytics vendors would include collaboration capabilities. More than half the vendors I have evaluated support comments and discussion in their products, but only a few have incorporated social recognition and wall posting as part of their collaborative capabilities. So, what impact does a lack of analytics collaboration have on organizations undergoing digital transformation?
Artificial intelligence (AI) and machine learning (ML) are all the rage right now. Our Machine Learning Dynamic Insights research shows that organizations are using these techniques to achieve a competitive advantage and improve both customer experiences and their bottom line. One type of analysis an organization can perform using AI and ML is predictive analytics. Organizations also need to plan their operations to predict the amount of cash they will need, inventory levels and staffing requirements. Unfortunately, while planning begins with predictions, organizations can’t plan with AI and ML. Let me explain what I mean.
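To make the distinction concrete, here is a minimal sketch of the prediction side of this workflow: fitting a least-squares trend line to historical demand and projecting it forward. The data values and function names are hypothetical, purely for illustration; real predictive analytics would involve richer models and data.

```python
# Minimal sketch of predictive analytics: fit a least-squares trend line
# to historical monthly demand and forecast the next period.
# The demand figures below are hypothetical, purely for illustration.

def fit_trend(history):
    """Return (slope, intercept) of the least-squares line through history."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast(history, periods_ahead=1):
    """Project the fitted trend forward to predict a future value."""
    slope, intercept = fit_trend(history)
    return intercept + slope * (len(history) - 1 + periods_ahead)

monthly_demand = [100, 104, 110, 113, 119, 124]  # hypothetical units sold
next_month = forecast(monthly_demand)  # trend projected one month out
```

The forecast is only an input: deciding how much cash, inventory or staff to commit based on it is the planning step, which is where AI/ML alone falls short.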
I was recently asked to identify key modern data architecture trends. Data architectures have changed significantly to accommodate larger volumes of data as well as new types of data such as streaming and unstructured data. Here are some of the trends I see continuing to impact data architectures.
MicroStrategy recently held its annual user conference, which focused on the theme of the “Intelligent Enterprise.” HyperIntelligence, an innovative product introduced a year ago for delivering analytics throughout organizations, was the star of the event. The company announced enhancements to HyperIntelligence and the latest version of its flagship platform, MicroStrategy 2020, as well as a new two-tiered education and certification program.
Organizations’ use of data and information is evolving as the amount of data and the frequency with which that data is collected increase. Data now streams into organizations from myriad sources, among them social media feeds and internet-of-things devices. These seemingly ever-increasing volumes of devices and data streams offer both challenges and opportunities to capture information about a business and improve its operations.
The emerging internet of things (IoT) is an extension of digital connectivity to devices and sensors in homes, businesses, vehicles and potentially almost anywhere. This innovation means that virtually any appropriately designed device can generate and transmit data about its operations, which can facilitate monitoring and a range of automatic functions. To do this, IoT requires a set of event-centered information and analytic processes that enable people to use that event information to make optimal decisions and act effectively.
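An event-centered process of this kind can be sketched in a few lines: consume a stream of sensor events and flag any reading that crosses a threshold so someone can act on it. The device names, readings and threshold below are hypothetical, purely for illustration.

```python
# Minimal sketch of event-centered IoT analytics: scan a stream of
# sensor events and raise an alert when a reading crosses a threshold.
# Device IDs and readings are hypothetical, purely for illustration.

from dataclasses import dataclass

@dataclass
class SensorEvent:
    device_id: str
    reading: float  # e.g., temperature in degrees Celsius

def monitor(events, threshold):
    """Yield an alert message for each event exceeding the threshold."""
    for event in events:
        if event.reading > threshold:
            yield f"ALERT {event.device_id}: reading {event.reading} exceeds {threshold}"

stream = [
    SensorEvent("pump-1", 71.2),
    SensorEvent("pump-2", 68.4),
    SensorEvent("pump-1", 83.9),
]
alerts = list(monitor(stream, threshold=80.0))  # one alert, for pump-1
```

In production the stream would arrive continuously from devices rather than a list, but the shape of the process is the same: events in, decisions and actions out.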