The impact of Intel 3D X-Point on SAP HANA

Posted by John Appleby, Global Head of DDM/HANA COEs, on August 3, 2015

A month ago, Hasso blogged about The Impact of Haswell on HANA. I agree with him of course; Intel's partnership with SAP on Haswell is important. That said, once we had Westmere, Ivy Bridge and Haswell were incremental improvements. In CPU technologies, incremental improvements are incredibly important: in this case they improved performance by nearly 3x per socket over 3 years, in line with Moore's Law (doubling roughly every two years gives about 2.8x over three years).

But let’s get to the real issues of running an in-memory database. There are two big ones.

1) DRAM Volatility

SAP HANA is an in-memory database, and DRAM is volatile: when the system loses power, the contents of memory are lost. HANA works around this by adding a non-volatile disk storage layer that preserves the data across power loss, ensuring that the important Durability property of ACID databases is retained.
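To make the idea concrete, here is a minimal sketch of the write-ahead-logging pattern that persistence layers like this are built on. It is purely illustrative, not HANA's actual design; the class and file names are invented for the example.

```python
import os

class DurableStore:
    """Toy key-value store: a volatile dict backed by an append-only
    log on disk, showing how a persistence layer restores Durability
    for an in-memory database. Not HANA's actual implementation."""

    def __init__(self, log_path="store.log"):
        self.log_path = log_path
        self.data = {}   # the "DRAM": volatile in-memory state
        self._replay()   # rebuild memory contents after a restart

    def put(self, key, value):
        # Write to the non-volatile log *before* acknowledging the
        # commit, so the change survives a power failure.
        with open(self.log_path, "a") as log:
            log.write(f"{key}\t{value}\n")
            log.flush()
            os.fsync(log.fileno())
        self.data[key] = value

    def _replay(self):
        # On startup, replay the log to reload memory: the step that
        # makes large in-memory systems slow to start from disk.
        if os.path.exists(self.log_path):
            with open(self.log_path) as log:
                for line in log:
                    key, value = line.rstrip("\n").split("\t", 1)
                    self.data[key] = value
```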

Back in the early days of computing, you could buy both DRAM and SRAM (Dynamic and Static RAM), but DRAM won on density and cost per bit, so it has become the main memory in every current system, including the laptop or mobile device you are using to read this (SRAM lives on in CPU caches, where speed matters more than capacity).

But let’s face it, DRAM volatility sucks, because it means you have to keep the power on to keep the contents of memory.

2) Size and Price

In my experience, customers usually overestimate the size of their databases, but that said, DRAM is expensive: the hardware costs around $60-80k/TB. If you need a 12TB SAP HANA system plus 3 copies of it for development and testing, that's a total of 48TB of DRAM and a hardware cost of $3m+.
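As a quick sanity check on those numbers, here is the arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope DRAM cost for the sizing above: $60-80k per TB,
# a 12TB production system plus 3 non-production copies.
tb_per_system = 12
copies = 4                      # production + 3 test/dev copies
cost_per_tb = (60_000, 80_000)  # low/high estimate in USD

total_tb = tb_per_system * copies               # 48 TB
low, high = (total_tb * c for c in cost_per_tb)
print(f"{total_tb} TB of DRAM: ${low:,} to ${high:,}")
# 48 TB of DRAM: $2,880,000 to $3,840,000 -- the $3m+ quoted above
```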

The value of HANA is enormous for many use cases, and so that kind of spend is well justified, but in a lot of enterprise systems, not all of that 12TB is really high value.

SAP has looked to resolve this by adding various types of lower-cost storage and moving the data around with Data Temperature Management, but this requires a separate database, which carries its own cost and complexity.

Enter Intel 3D X-Point (3DXP)

Intel 3D X-Point sits at the intersection of SSD and DRAM. It's faster and more durable than SSD, slower than DRAM, and 10x denser than DRAM because it is built in three dimensions. This Anandtech article explains it really well.

For HANA, this is the most important hardware innovation since Intel Westmere. It’s almost as if Intel designed 3DXP for SAP HANA! Have a think about the following:

1) Eliminating Hard Disks

Hard disks are a pain in HANA because they often require Enterprise Storage for what's basically just a backup. 3DXP will be more expensive than SSD, but because it doesn't require expensive interconnects, the total system cost should be lower.

There's some development work for SAP to do, because 3DXP will be installed inside the HANA system itself, and clustered systems will need a way to share that storage (remember that HANA scale-out is a shared-nothing architecture). Ideally, SAP needs to invest in a clustered filesystem to solve this problem.

2) Adding a Warm Data Temperature

SAP S/4HANA systems now have a capability called Data Aging, which pushes older data onto hard disk. This is very useful for data like accounting documents, which must be retained for 7-10 years for compliance purposes but are rarely accessed outside the current and previous year.

This is enabled by core SAP HANA database functionality introduced in HANA SPS09, which allows in-memory and disk-based data to be managed as a single database. I expect this functionality to be extended out into the warm tier.

In an S/4HANA system with 10 years of accounting data totalling 5TB, 1TB of it might be considered "Hot" and 4TB "Warm". Data older than the regulatory period could be considered "Cold" and put onto much cheaper archive storage. Currently, tools like OpenText rule the archive market, but I believe Hadoop will become a much more cost-effective mechanism, and it seems likely SAP will build Hadoop archiving into S/4HANA.
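As a sketch of how such a tiering rule might look, here is a hypothetical age-based classifier for the accounting example above. The tier names mirror the text, but the thresholds and the temperature function are invented for illustration; this is not SAP's Data Aging implementation.

```python
from datetime import date

# Hypothetical age-based temperature rule for 10-year accounting data.
def temperature(posting_date: date, today: date = date(2015, 8, 3),
                retention_years: int = 10) -> str:
    age_years = today.year - posting_date.year
    if age_years <= 1:                  # this year and last year
        return "hot"                    # keep in DRAM
    if age_years <= retention_years:    # inside the compliance period
        return "warm"                   # candidate for a 3DXP tier
    return "cold"                       # cheap archive (e.g. Hadoop)

print(temperature(date(2015, 3, 1)))    # hot
print(temperature(date(2010, 6, 15)))   # warm
print(temperature(date(2003, 1, 20)))   # cold
```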

And remember – this could all be done dynamically based on how the system is used. That brings significant benefits over the existing archiving mechanisms in ERP.

3) Speeding Start Times

One of the challenges with large HANA systems is that the start time can be slow: 15 mins per TB is typical, so a large system can take 1-2h to be fully available. Today this is mitigated by running an HA cluster of two systems, so you can do rolling upgrades and keep one node up at all times.

With 3DXP, we can expect to load data from persistent storage into memory in times measured in seconds. The specific performance figures have not been made public, but we expect them to sit between DRAM and SSD.
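For scale, here is the restart arithmetic using the 15 mins/TB figure quoted above; the system sizes are just examples:

```python
# Restart time from conventional storage at ~15 minutes per TB.
def start_time_minutes(size_tb: float, minutes_per_tb: float = 15.0) -> float:
    return size_tb * minutes_per_tb

for size_tb in (4, 8, 12):
    print(f"{size_tb:2d} TB: {start_time_minutes(size_tb):5.0f} min")
# 4 TB -> 60 min, 8 TB -> 120 min, 12 TB -> 180 min; with 3DXP this
# reload step largely disappears, since the data is already persistent.
```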

Final Words

I can't speak on behalf of SAP, but assuming they build this in, it will take SAP some time to add 3DXP support to HANA and S/4, just as it will take Intel time to mass-produce the 3D wafers in volume. HANA development runs in 6-month release cycles, so I'm hoping we see initial support in 2016 with SPS14 of SAP HANA.

I don't say this often, but this is definitely revolutionary for SAP HANA. G… C… some might say!
