How New Non-Volatile Hardware Technology Revolutionizes In-Memory Computing

Daniel Schneiss
SVP, SAP HANA Platform & Databases

Digital transformation is occurring whether organizations are ready for it or not. According to Forrester’s Unleash Your Digital Predator, 89% of executives believe digital will disrupt their business in the next 12 months.

Not surprisingly, Gartner also found that 70% of executives believe that IT investment can impact their company's ability to embrace digital transformation and spur innovation, on both the software and the hardware front.

On the software side of things, SAP HANA 2 allows powerful analytical processing so organizations can build insight-driven applications to stay ahead of the competition.

But how does SAP HANA 2 support them in embracing new hardware technology as well?

Handling More Data in Memory for Even Greater Insight
Acting in real-time requires a modern data platform that can process increasingly large, complex volumes of data, both transactional and analytical, in memory to deliver insights and results the moment you need them.

Customers who are innovating with SAP HANA already process large amounts of data in memory – up to 50 TB, which would correspond to roughly 500 TB in a traditional database. This is only possible thanks to SAP HANA's superior compression. And data growth is not slowing down – as data grows, so does our customers' need to create business value from even larger amounts of data in memory.

Once again, SAP HANA is at the forefront, pioneering the adoption of the latest Non-Volatile RAM (NVRAM) hardware technology to evolve in-memory computing even further.

How SAP HANA uses NVRAM
SAP HANA will use NVRAM as an extension of classical dynamic random access memory (DRAM) by selectively shifting data structures from DRAM to NVRAM.

This enables SAP HANA to exploit the unique characteristics of both technologies. The best part is that this does not require "heart surgery" on the SAP HANA database, because SAP HANA's design already accounts for multiple memory tiers, each optimized for a specific purpose with specific characteristics.
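The tiering idea can be sketched as follows. This is a minimal illustration, not SAP HANA code: a memory-mapped file stands in for a byte-addressable NVRAM region (in a real deployment this would be a file on a DAX-mounted persistent-memory device), ordinary process memory stands in for DRAM, and the `place_column` helper and its placement rule are hypothetical names invented for this sketch.

```python
import mmap
import os
import tempfile

# Stand-in for an NVRAM region: a memory-mapped file. On real hardware this
# would live on a DAX-mounted persistent-memory device.
pmem_path = os.path.join(tempfile.gettempdir(), "pmem_region.bin")
with open(pmem_path, "wb") as f:
    f.truncate(1 << 20)  # 1 MiB illustrative region

pmem_file = open(pmem_path, "r+b")
pmem = mmap.mmap(pmem_file.fileno(), 0)  # byte-addressable, like NVRAM

dram = {}  # stand-in for DRAM-resident (volatile) structures


def place_column(name, data, read_optimized, offset=0):
    """Hypothetical placement rule: shift the read-optimized main part
    to NVRAM; keep write-optimized structures in low-latency DRAM."""
    if read_optimized:
        pmem[offset:offset + len(data)] = data  # survives restarts
        pmem.flush()  # analogous to a persistent-memory flush/fence
        return ("nvram", offset, len(data))
    dram[name] = bytearray(data)  # volatile, lowest latency
    return ("dram", name, len(data))


main_loc = place_column("sales_main", b"compressed-main-fragment", True)
delta_loc = place_column("sales_delta", b"recent-inserts", False)
print(main_loc[0], delta_loc[0])  # → nvram dram
```

The design choice mirrored here is that the decision is per data structure, not per database: each structure goes to the tier whose characteristics suit its access pattern.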

As a first major step, the highly compressed, read-optimized part of the column store – which accounts for over 90% of all data in most SAP HANA systems – is enabled for placement in NVRAM. Having 90% of your data still in memory, even after a database or server shutdown, means no reload from persistency and maximum performance right from the start.
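The "no reload after shutdown" effect can be illustrated by simulating a restart: close the mapping and open a fresh one, as a new process would. Again, the memory-mapped file is only a stand-in for an NVRAM device, and the column content is a made-up placeholder.

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "column_main.bin")

# First "run": write a read-optimized column fragment to the NVRAM stand-in.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0)
    m[0:18] = b"dictionary+vectors"  # placeholder for compressed main content
    m.flush()
    m.close()

# "Restart": a new mapping sees the data immediately -- no reload from the
# persistence layer, no rebuild of in-memory structures.
with open(path, "r+b") as f:
    m2 = mmap.mmap(f.fileno(), 0)
    survived = bytes(m2[0:18])
    m2.close()

print(survived == b"dictionary+vectors")  # → True
```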

As NVRAM DIMMs are expected to be both larger and cheaper than DRAM DIMMs, customers can choose a higher total memory capacity, a higher capacity at the same price, or the same capacity at a lower price – all of which directly affects their TCO.

SAP HANA First to Adopt Intel’s New Persistent Memory
At SAPPHIRE this year, Intel showcased a pre-release version of the Intel Xeon Scalable Family platform with 192 gigabytes of DRAM and 1.5 terabytes of Intel's persistent memory running on a development version of SAP HANA. The demo showcased several thousand SAP HANA user sessions performing read and write operations using both types of memory. Intel persistent memory handles large volumes of real-time operations such as inserts and updates, while DRAM is used for low-latency read operations. By targeting the right operations to each memory type, the system handles larger capacity while maintaining overall performance.


Intel's persistent memory is a perfect fit for SAP HANA's architecture, as it allows SAP HANA to collapse what were previously multiple layers of the memory hierarchy into a single one, combining memory and storage in one device. Whole data sets can thus reside permanently in memory, enabling innovative analytic scenarios that have not been possible before.

In addition, because data remains in memory through power cycles, restart times will be a fraction of the time it takes to load all data from disk. And since Intel's persistent memory isn't burdened with the same cost structure as DRAM, IT organizations can achieve massive in-memory capacity at a lower TCO.
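A back-of-envelope calculation shows why the restart difference is so large. The 50 TB data volume comes from earlier in this article; the aggregate load throughput from the persistence layer is an assumed figure for illustration only, not a measured one.

```python
# Illustrative restart-time estimate. The dataset size is from the article;
# the load throughput is an ASSUMED figure, not a benchmark result.
dataset_tb = 50        # in-memory data volume (from the article)
load_gb_per_s = 2.0    # assumed aggregate load rate from persistency

reload_seconds = dataset_tb * 1024 / load_gb_per_s
reload_hours = reload_seconds / 3600
print(f"reload from disk: ~{reload_hours:.1f} h; "
      f"from NVRAM: near zero (data already mapped)")  # → ~7.1 h
```

Whatever the exact throughput, a reload measured in hours versus data that is simply still there after a power cycle is the difference the paragraph describes.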

A Whole New Era of In-Memory Computing…
With the new Xeon Scalable processor platforms launching this summer, SAP customers can benefit from an SAP HANA performance increase of up to 59% using standard DRAM.

Intel persistent memory will be available with a processor refresh of the Xeon Scalable platform in 2018, code-named Cascade Lake. And while some of SAP's competitors are still playing with Optane simulators in their labs, SAP HANA is working with the leading platform providers to be among the first in the market to adopt this new technology.

The early adoption of Non-Volatile Memory within the SAP HANA database will enable customers to increase SAP HANA performance by a factor of up to 1.59, increase the memory capacity of their servers, and lower TCO dramatically.

…Starts with SAP HANA
Truly digital innovation needs bold ideas – and the courage and drive to turn them into reality. You won't win in today's digital economy by merely following the competition – you need to lead the pack. Once again, we are not waiting for others to guide us. We continue to stay ahead, making clear to the market that we are not only at the forefront of innovation but also delivering on our mission: offering our customers what they deserve – the very best technology and solutions available on the market.
