Setting The Record Straight – SAP HANA v. Exadata X3

Posted by Vishal Sikka on October 2, 2012


At the Oracle Open World keynote this week, Oracle repeated what Hasso showed years ago – “Everything is in memory…Disk drives are becoming passé.” We are, of course, glad they realized this. With SAP HANA, our customers have been benefiting from this reality for more than 18 months now.

And yet Oracle made statements that are clear distortions and misrepresentations of HANA. It has become something of a recurring theme, to misstate and distort things. As industry leaders, we must do better. It behooves us to tell the truth to our customers, our partners and our employees. We do not serve our stakeholders well with misstatements and omissions of key things we know to be true. They deserve better. History deserves better. It is true that HANA represents a fundamentally new, rethought database paradigm, and is meeting with tremendous success in the market. Perhaps it is its disruptive nature that threatens the status quo of database incumbents. Perhaps it is some other reason. Regardless, I find myself once again setting the record straight.

The statement Mr. Ellison made about HANA, while announcing a new Exadata machine with 4TB of DRAM and 22TB of SSD, is false. He referred to HANA as being on “a small machine” with 0.5TB of memory. He said his machine has 26TB of memory, which is also wrong: SSD is not DRAM and does not count as memory, and HANA servers also use SSDs for persistence.

Here is the truth:

  1. HANA systems range from the very small to extremely large-scale systems. HANA’s architecture, with full exploitation of the massive parallelism of multi-core systems and native use of memory via new, totally redesigned data structures, enables nearly unlimited scalability.
  2. For the last several months we have been shipping certified 16-node HANA hardware made by 4 vendors: IBM, HP, Fujitsu and Cisco. These systems are available with 16TB of DRAM, so they are already 4 times bigger than Oracle’s machine, and they have been in the market since spring of this year. The machines can take up to 32TB of DRAM within their current configurations. In IBM’s case, with the Max5 configuration, they can go up to 40TB.
  3. The largest HANA system deployed so far is the 100-node system built earlier this year with Intel & IBM (see picture below). It has 100TB of DRAM and 4000 CPU cores. Mr. Ellison is welcome to come to Santa Clara and see the system working for himself, with workloads from several customers. We shared this information in front of tens of thousands of customers at our SAPPHIRE NOW event earlier this year. Already today this system can go up to 250TB of DRAM (and with HANA’s compression, can hold multiple petabytes of data entirely in-memory). Our partner, Steve Mills, Senior Vice President and Group Executive of IBM’s Software & Systems, whose team helped build this system, had this to say in support of this open innovation: “IBM and SAP have partnered to demonstrate an SAP HANA system at 100 TB, making it the largest in-memory database system in the world. That system, running on 100 IBM X5 nodes, can now scale to 250 TB.”
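The capacity figures in the list above reduce to simple arithmetic. A minimal sketch of those back-of-the-envelope numbers, assuming a purely illustrative 10x compression ratio (the post itself claims only “multiple petabytes”, not an exact factor):

```python
# Figures as quoted in the post.
oracle_dram_tb = 4        # Exadata X3 DRAM
hana_cert_dram_tb = 16    # certified 16-node HANA systems

# The "4 times bigger" comparison.
print(hana_cert_dram_tb / oracle_dram_tb)        # 4.0

# The 100-node Intel/IBM system, per node.
nodes, total_dram_tb, total_cores = 100, 100, 4000
print(total_dram_tb / nodes)                     # 1.0 TB of DRAM per node
print(total_cores / nodes)                       # 40.0 cores per node

# Scaled-out maximum with an ASSUMED compression ratio (hypothetical;
# real ratios vary widely by workload and data).
max_dram_tb = 250
assumed_compression = 10
print(max_dram_tb * assumed_compression / 1000)  # 2.5 PB of effective data
```

A 10x ratio is only one plausible point; any factor above 4x already puts the effective capacity over a petabyte, consistent with the “multiple petabytes” claim.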


With the processor and memory roadmaps from our partners, these systems will double in capacity by this fall/early 2013 (so multiply the numbers above by two). And we don’t have to release new versions of hardware to take advantage of this innovation.

HANA is built on a simple notion: advances in hardware, and deep research into the nature of modern enterprise software, enable us to rethink the database. And we did; I treated this notion as a design principle for HANA’s construction. Others are trying to protect database systems that were designed in the past. The use of new SSD access technology, which accelerates access to flash and demonstrates performance improvements, simply reinforces this point. HANA also benefits from this technology, for reading logs, for restart performance, and so on, as do our ASE and IQ databases. But HANA is built on a basic principle that Hasso articulated many years ago: when we run everything in-memory, we get predictable response times on even the most complex queries and algorithms, and everything can execute with massive parallelism.

This power gives us the freedom to rethink enterprise software. To renew existing systems without disruption: from OLTP apps (such as our Business One product, which we released on HANA last week) to analytics, from structured data processing to unstructured. To rethink systems to run thousands of times faster, and to eliminate batch jobs everywhere. It also gives us the ability to build completely new applications, unprecedented solutions. To help software simplify the world, and connect it better, in real time: from genome sequencing to energy exploration, from real-time customer intimacy to inclusive banking. To liberate us from the confines and limitations of systems of the past, and to be limited only by our imagination.

As Alan Kay always told me, the future does not have to be just an increment of the past. We choose to focus on the future: on building a highly desirable, feasible and viable future, with our own hands, with our customers and partners, instead of incrementing the limited systems of the past with temporary technologies. And we think there is no room for lies in that world. The truth of a HANA-based landscape, and its unmistakable success, is open to all, and it is ours to take and build on.

Happy HANA.


