HANA vs. Teradata – Part 2

Posted by Robert Klopp on September 26, 2012


Teradata has formulated a position that declares HANA to be “hype” and suggests that SAP is acting irrationally, based on a formula which holds that data warehouses grow at 40% per year while the cost of memory falls at only about 20% per year (their post quoted 30% every 18 months, which compounds to roughly 20% annually). This is a silly argument; in fact, once you work it through you will laugh out loud.
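As a quick check of that rate conversion, here is a minimal sketch; the 30%-per-18-months figure is from Teradata's post, and the annualization is simple compounding:

```python
# Convert the quoted memory price decline of 30% per 18 months
# into an equivalent annual rate by simple compounding.
decline_per_18_months = 0.30          # 30% drop over 18 months (from their post)
months_per_year = 12.0

# Remaining price fraction after one year of the same compound decline
remaining_per_year = (1.0 - decline_per_18_months) ** (months_per_year / 18.0)
annual_decline = 1.0 - remaining_per_year

print(f"Annual memory price decline: {annual_decline:.1%}")  # ~21.2%, i.e. roughly 20% per year
```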

Using the Five Minute Rule, we suggested here that, with no compression, a table accessed at least once every 50 minutes should economically be stored in memory. If the data is compressed 2X, it should be in-memory if it is accessed at least every 100 minutes; if it is compressed 4X, it should be in-memory if it is scanned at least every 200 minutes; and so on. Note that this holds regardless of the size of the table and is based purely on the economics of the hardware.
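To make the scaling concrete, here is a minimal sketch assuming the 50-minute uncompressed break-even quoted above and a break-even interval that scales linearly with the compression ratio:

```python
# Break-even access interval under the Five Minute Rule figures quoted above:
# an uncompressed table pays for memory residence if it is accessed at least
# once every 50 minutes, and the interval scales linearly with compression.
UNCOMPRESSED_BREAK_EVEN_MINUTES = 50.0  # figure used in this post

def break_even_interval(compression_ratio: float) -> float:
    """Longest access interval (minutes) at which in-memory storage still pays off."""
    return UNCOMPRESSED_BREAK_EVEN_MINUTES * compression_ratio

for ratio in (1, 2, 4, 8):
    print(f"{ratio}X compression -> in-memory if accessed every "
          f"{break_even_interval(ratio):.0f} minutes or less")
```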

Based on price and performance, we suggested here that it does not matter how often a table is accessed: it should be in-memory. Both arguments rest on architecture and are self-evident; furthermore, they are architectural statements of fact, not marketing.

Other vendors have no legitimate argument against the Five Minute Rule: it is a Rule.

But to be fair, Teradata or others might argue that price and performance is not the right measure. They could suggest that adequate performance on their system is possible at a lower price. This is an odd position for them to take, as price and performance has been their mantra, but it is a reasonable point: you can elect to accept sub-optimal performance for a lesser price. We would, of course, argue that there is a cost to sub-optimal performance: users are less productive, new real-time use cases cannot be built, and so on.

But for now let’s go with the numbers and suppose you deploy a 100TB HANA data warehouse today because, based on the two papers above, HANA is economically justified.

A year later, using Teradata’s numbers, your database has grown 40% to 140TB and the cost of memory has dropped 20%. You re-evaluate the economics, and the 20% memory price drop makes HANA even more competitive, so you stick with HANA.

In the following year your data warehouse grows another 40%, to just about 200TB (196TB, to be exact), and the cost of memory drops another 20%, making HANA even more economically attractive. And so on… You didn’t expect this, did you?
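Here is the same compounding laid out as a small sketch; the 100TB starting size and the memory price index of 1.0 are illustrative, while the 40% growth and 20% annual price decline are Teradata’s own figures:

```python
# Year-by-year view using Teradata's own figures: data grows 40% per year
# and the per-TB price of memory falls 20% per year. The starting size
# (100TB) and price index (1.0) are illustrative.
size_tb = 100.0
memory_price_index = 1.0

for year in range(1, 4):
    size_tb *= 1.40              # 40% annual data growth (hits any platform you choose)
    memory_price_index *= 0.80   # 20% annual memory price decline (shifts economics toward HANA)
    print(f"Year {year}: ~{size_tb:.0f}TB of data, "
          f"memory at {memory_price_index:.2f}x of today's price per TB")
```

The data growth applies to whatever platform you run; it is the steadily falling price per TB of memory that keeps improving HANA’s relative position each year.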

Teradata has made the case for HANA for us with their own numbers, but they spun those numbers with a lack of logic that sounds compelling only if you don’t work it through. In fact, the economics in support of HANA are truly compelling, and this lack of logic is easily dismissed.
