
This is the second in a series of posts on the architectures of analytic databases. The first post addressed massively parallel processing (MPP) database technology. In this post, we'll look at columnar database technology. Given the recent announcement of HP's plan to acquire Vertica, a columnar database vendor, there is likely to be even more interest in how columnar technology operates and what benefits it offers.

Fundamentally, columnar database technology offers two primary benefits: increased speed and reduced storage requirements. We repeatedly emphasize the importance of speed to end users. Our benchmark research on business intelligence and performance management shows that performance is a key consideration among those seeking improvement in this area. Besides speed and reduced storage, other benefits may exist in particular implementations, but these two are the most significant and the most directly attributable to columnar technology.

Columnar technology was made popular by Sybase, now part of SAP, with its IQ product. Today, many other vendors have brought columnar database products to market, among them 1010data, Calpont, Infobright, ParAccel, Sand Technology, SenSage and Vertica, which HP has just agreed to acquire. Recently, traditional row-oriented database vendors, among them Aster Data (now being acquired by Teradata), EMC Greenplum and Oracle, have added some columnar capabilities to their products. Additionally, in-memory database technologies frequently utilize column-based architectures.

Columnar database technology typically includes columnar storage and/or columnar database execution. We’ll talk about both and why each is important.

Columnar storage turns the traditional relational (row-oriented) database on its side. Instead of storing data in rows, which is the norm for databases such as IBM DB2, Microsoft SQL Server, MySQL and Oracle, data is stored by column. Whereas a row would consist of customer name, order date, order amount, order number, shipment date, method of shipment and so on, a column of data would consist of all the customer names. A separate column would contain all the order dates, another all the order amounts, and so on. Row-oriented storage is more efficient (that is, faster) when recording or retrieving the data necessary to process a transaction, because the disk only needs to go to one location for either operation. Column-oriented storage, on the other hand, is more efficient when querying the same item across many rows of data, which is common in most business intelligence (BI) queries. Since all the similar items are grouped together on disk, the database can scan them more rapidly than if it had to retrieve all the fields in a record just to get at one or two of them. The difference is magnified when querying hundreds of millions or billions of rows.
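To make the difference concrete, here is a small Python sketch contrasting the two layouts. The table, field names and values are hypothetical, and real engines of course work with disk pages rather than in-memory lists:

```python
# Row-oriented layout: each record is stored together.
rows = [
    ("Acme Corp", "2011-02-14", 1200.00, 1001),
    ("Baker Ltd", "2011-02-14",  450.00, 1002),
    ("Acme Corp", "2011-02-15",  300.00, 1003),
]

# Column-oriented layout: all values of a field are stored together.
columns = {
    "customer":     ["Acme Corp", "Baker Ltd", "Acme Corp"],
    "order_date":   ["2011-02-14", "2011-02-14", "2011-02-15"],
    "amount":       [1200.00, 450.00, 300.00],
    "order_number": [1001, 1002, 1003],
}

# A transaction touches one whole record: natural for the row layout.
record = rows[1]

# A typical BI query ("total order amount") scans a single field across
# all rows: with the columnar layout only the 'amount' values are read,
# not every field of every record.
total = sum(columns["amount"])
print(total)  # 1950.0
```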

Columnar organization yields several storage-related benefits as well. First, columnar databases make little or no use of user-defined indexes, so they require no additional storage beyond that for the data itself. Second, once the data is sorted and stored by column, the potential for compression increases dramatically. One simple example I like to use is to imagine storing billions of stock quotes, instances of web activity, network activity or any other type of data that includes a date field. Even if you kept five years of history, there are only 1,826 or 1,827 (depending on leap years) unique date values in that period. With a row-oriented storage approach, you would need to store all of those billions of values on disk. Using a columnar storage technique, you could potentially store each date only once and record the number of occurrences for that date. This technique, referred to as run-length encoding, can lead to dramatic compression ratios, as the example suggests. Other techniques are available too, but I won't go into those at this point. What is important to note is that the benefits of compression are significant enough that row-oriented vendors have figured out how to engineer some of these same techniques into their storage algorithms.
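Here is a minimal Python sketch of the run-length idea; the dates are made up, and production systems use far more sophisticated encodings:

```python
from itertools import groupby

# Run-length encoding of a sorted date column (values are made up).
# Billions of rows collapse to at most ~1,827 (date, count) pairs
# for five years of history.
dates = ["2011-02-14"] * 4 + ["2011-02-15"] * 2 + ["2011-02-16"] * 3

encoded = [(value, len(list(run))) for value, run in groupby(dates)]
print(encoded)  # [('2011-02-14', 4), ('2011-02-15', 2), ('2011-02-16', 3)]

# Decoding reproduces the original column exactly.
decoded = [value for value, count in encoded for _ in range(count)]
assert decoded == dates
```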

Let's move on to the second major aspect of columnar databases: columnar execution. Columnar databases not only can reduce the amount of storage required but in many cases can also reduce the amount of memory required to process the data. If the compact representation of the data can be retained as queries are processed, it follows that less memory is required to manipulate the data. Without going into all the details, some columnar database engines can select subsets of data and perform joins on the compact representation, resulting in even more efficiency and performance gains.
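As a rough illustration (not any vendor's actual implementation), here is a sketch that computes aggregates directly on a run-length-encoded column, so the underlying values are never materialized in memory; the value/count pairs are invented for the example:

```python
# Aggregating directly on the run-length-encoded representation:
# only the (value, count) pairs are held in memory; the full column
# is never reconstructed. Real engines use far richer operators.
encoded_amounts = [(10.0, 1_000_000), (25.0, 500_000), (99.0, 250_000)]

row_count = sum(count for _, count in encoded_amounts)
total = sum(value * count for value, count in encoded_amounts)
print(row_count, total)  # 1750000 47250000.0
```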

Columnar databases do have their downsides. They are typically less efficient when it is necessary to update or delete data, for several reasons. First and foremost, updating or deleting a single row of data requires finding several locations on disk where the individual columns are stored. Even single-row retrievals can be slower, resulting in a noticeable performance difference. I know of one financial services firm whose spreadsheets contain thousands of individual data references in separate cells. Even though each individual lookup takes only a small fraction of a second longer, the overall performance is much slower because the difference is magnified many times over. The second reason updates or deletes can be slower is the organization of the data. If a single value in the middle of a long list of values is deleted or updated, some portion of the page needs to be reorganized. Depending on the vendor's approach, this reorganization issue can become significant over time as more and more updates or deletes are processed.
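A hypothetical sketch makes the first problem visible: one logical row is spread across every column structure, so a single delete touches N columns rather than one record. The data is made up, and on disk the page reorganization described above adds further cost:

```python
# Illustrative only: deleting one logical row in a column store.
columns = {
    "customer":   ["Acme Corp", "Baker Ltd", "Acme Corp"],
    "order_date": ["2011-02-14", "2011-02-14", "2011-02-15"],
    "amount":     [1200.00, 450.00, 300.00],
}

def delete_row(cols, index):
    # One delete must locate and remove a value in every column,
    # not a single record in one location.
    for values in cols.values():
        del values[index]

delete_row(columns, 1)
print(columns["customer"])  # ['Acme Corp', 'Acme Corp']
```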

Because of these issues with columnar databases, at least two hybrid implementations have emerged. One form of hybrid is a row-oriented database with some columnar capabilities added, such as those noted above. The other form is a columnar database with some row-oriented capabilities added on. I suggest you think about a hybrid architecture this way: it minimizes the downsides of each approach, but it does not eliminate them.

The bottom line is that your database engine will be primarily column- or row-oriented, and you should consider that in the selection and evaluation process. Over time I suspect the hybrid techniques may approach each other in terms of performance and capabilities, but for the time being I think you will still see some differences depending on the specific use case and workload you are trying to manage.

Columnar databases can also be implemented as MPP systems, as hardware appliances or as in-memory systems. Calpont, ParAccel and Vertica are examples of MPP columnar databases. Kickfire, acquired by Teradata, is probably the best example of an appliance-based columnar system, while the SAP High-Performance Analytic Appliance (HANA) is probably the most ambitious in-memory columnar system.

I hope you can use this information as you investigate database vendors and evaluate their product offerings. Consider whether you need columnar storage alone or columnar execution as well. Consider whether the cost of learning and managing a separate system is worth the benefits. Consider your current and future use cases and how well they match a row-versus-column orientation. Regardless of what you think you understand about the different approaches, make sure to conduct a proof of concept with your data and your workload. As you might imagine from this article, the differences between these approaches can be subtle, and a proof of concept is likely to be the only way to evaluate which approach works best for you.

Regards,

David Menninger – VP & Research Director

It's clear that we are now living in the era of big data. The stores of data on which modern businesses rely are already vast and increasing at an unprecedented pace. Organizations are capturing data at deeper levels of detail and keeping more history than ever before. Managing all of this data is thus emerging as one of the key challenges of the new decade.

The solutions to this challenge vary, but interest in them seems to be universal. The largest database vendors and others that wish to compete with them are developing or acquiring various technologies, among them database appliances, massively parallel databases, and columnar databases.

I have written recently about another contender in this arena: an open source parallel processing framework named Apache Hadoop, which has gained popularity as a way to deal with very large sets of data. The rise of Hadoop has been dramatic. It has been successfully deployed at some of the largest Internet-based organizations in the world, including eBay, Facebook, Google and Yahoo. Seeing this, other organizations whose business depends on managing large amounts of data have begun to explore Hadoop as well.
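For readers unfamiliar with the model, Hadoop parallelizes the MapReduce pattern across a cluster of machines. Here is an in-process Python sketch of the pattern itself, not the Hadoop API; the function names and data are illustrative only:

```python
from collections import defaultdict

# The map step emits (key, value) pairs from each input record.
def map_phase(line):
    for word in line.split():
        yield word, 1

# The reduce step combines all values emitted for one key.
def reduce_phase(word, counts):
    return word, sum(counts)

lines = ["big data big clusters", "big data"]

grouped = defaultdict(list)            # the "shuffle": group by key
for line in lines:                     # in a real cluster, each line could
    for word, one in map_phase(line):  # be mapped on a different node
        grouped[word].append(one)

result = dict(reduce_phase(w, c) for w, c in grouped.items())
print(result)  # {'big': 3, 'data': 2, 'clusters': 1}
```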

Last October 12 in New York City, Hadoop World attracted approximately 900 attendees. Local Hadoop user groups are popping up around the world and attracting hundreds of users. The Hadoop User Group on LinkedIn has more than 4,000 members. More and more software vendors are adding support for Hadoop. Daniel Abadi, who has done extensive work on a project called HadoopDB, recently received funding for a big-data start-up company.

However, because the level of interest in Hadoop has grown so rapidly, there is little research on the business applications that use Hadoop and the IT requirements necessary to support a successful Hadoop deployment. Many audiences could benefit from independent research, information and analysis in this area. CIOs considering Hadoop need more information about the business case and the characteristics of successful deployments. Organizations developing Hadoop and related software products need to understand better the characteristics of the developing markets they seek to serve. And software companies looking to incorporate Hadoop need guidance on how best to adapt their own product lines.

Another consequence of the dramatic rise in popularity of Hadoop is much confusion among vendors as well as buyers. The rapidly changing market situation makes it difficult for vendors to determine how to allocate investments in Hadoop in ways that will maximize revenues. Without reliable information, vendors are left to guess at what potential customers need, and so they risk creating a flawed strategy based on inaccurate assumptions about the market.

To help address these issues, Ventana Research is conducting benchmark research that will gather data on the enterprise use of Hadoop, its key components and related technologies; we expect it to provide previously unavailable insights. The research will establish how Hadoop is being used and for what business purposes. It also will explore whether Hadoop is replacing existing technologies and, if so, which ones. In addition, this program will identify benefits that organizations are realizing from the use of Hadoop as well as issues they have encountered.

Background about this benchmark research can be found here, and the results will be posted there once they are available. Whether you are working with Hadoop or using some other technology to tackle your big-data challenges, please participate in this research to help advance the state of the industry with respect to managing and analyzing large-scale data.

Regards,

David Menninger – VP & Research Director
