In my previous blog, I explained that SAP HANA brings a new approach to data management, the in-memory first approach. I also mentioned that several customers are already experiencing the performance gains and simplification of SAP HANA, and that Forrester Research has written about the savings companies can achieve with it. For a detailed understanding of how SAP HANA can be applied to your business, this interactive document has many resources to help you determine the SAP HANA fit for your company.
Back to SAP HANA’s data management approach… with SAP HANA, all data is in a columnar format and in memory by default. This single data copy is used for both transactions and analytics. To be clear, saying that all data is in memory does not mean you can’t move some warm or cold data to disk or a remote system and still access it when you need it. The dynamic tiering option (introduced in SPS09), the near-line storage (NLS) option, and smart data access all allow that.
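To make the columnar idea concrete, here is a minimal sketch in Python, not SAP HANA internals, of the same table stored row-wise and column-wise. The table and attribute names are invented for illustration; the point is that an analytic scan over one attribute touches only that attribute’s values in the columnar layout.

```python
# Conceptual sketch (not SAP HANA code): one table, two physical layouts.

# Row-wise layout: each record keeps all of its attributes together.
rows = [
    {"id": 1, "region": "EMEA", "amount": 120.0},
    {"id": 2, "region": "APJ",  "amount": 75.5},
    {"id": 3, "region": "EMEA", "amount": 200.0},
]

# Columnar layout: one contiguous list per attribute.
columns = {
    "id":     [r["id"] for r in rows],
    "region": [r["region"] for r in rows],
    "amount": [r["amount"] for r in rows],
}

# An aggregate such as SUM(amount) scans only the "amount" column,
# never touching ids or regions.
total = sum(columns["amount"])
print(total)  # 395.5
```

The same single copy of data serves the transactional view (the records) and the analytical scan (the column), which is the essence of the “one data copy for both workloads” claim.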
With the clarification out of the way, let me share my next 5 questions. As I did in my previous blog, I am answering them for SAP HANA.
6. With DRAM prices plunging, should businesses continue to use legacy disk-based databases?
You don’t have to. SAP HANA is an ANSI SQL-compliant, in-memory platform designed to take advantage of the latest hardware and in-memory technology innovations. SAP HANA does use disk to persist data so that, in case of a power failure or disaster, data can be restored; but think of disk for HANA as a new form of tape backup system.
7. Do SAP Applications run better in a true in-memory solution?
Yes, most SAP applications are already optimized to run better on SAP HANA because their business logic now runs inside SAP HANA. SAP is working to optimize the remaining applications. Reports and business logic that depend on aggregates don’t need to rely on materialized views; instead, they can aggregate up-to-date information on the fly. Think about that for just a second: no more pre-aggregates, no more materialized views, no more special indexes needed to get performance gains with SAP HANA.
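The on-the-fly aggregation point can be sketched in a few lines of Python (the table and column names here are invented, and this is a conceptual illustration rather than how SAP HANA executes the query). Instead of maintaining a pre-computed totals table that must be refreshed, each request aggregates the current base rows directly:

```python
# Conceptual sketch: GROUP-BY aggregation computed on the fly from base
# rows, rather than read from a maintained materialized view.
from collections import defaultdict

line_items = [
    ("EMEA", 100.0), ("APJ", 50.0), ("EMEA", 25.0), ("AMER", 10.0),
]

def totals_by_region(items):
    # Equivalent of SELECT region, SUM(amount) ... GROUP BY region,
    # recomputed from up-to-date base rows every time it is asked for.
    acc = defaultdict(float)
    for region, amount in items:
        acc[region] += amount
    return dict(acc)

print(totals_by_region(line_items))  # {'EMEA': 125.0, 'APJ': 50.0, 'AMER': 10.0}

# A newly inserted transaction is visible to the very next aggregation;
# there is no separate aggregate structure to refresh or keep in sync.
line_items.append(("APJ", 40.0))
print(totals_by_region(line_items)["APJ"])  # 90.0
```

The trade-off is that each query does the scan work itself, which is exactly what a fast in-memory columnar scan makes affordable.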
8. Is my IT landscape simpler and less costly in the long run with a true in-memory solution?
Yes. With SAP HANA’s modern design, you can run more business logic close to the data, run transactions and analytics on the same database instance, and even avoid using an application server in some cases. SAP HANA is an integrated platform that processes almost all data types, including structured, unstructured, text, text in binary files, streaming, and spatial data. Additionally, with columnar tables, advanced compression, no materialized views, and the ability to handle different workloads on a single copy of data, more data can be processed efficiently in memory.
9. If I want to run transactions and analytics on the same system, do I need more DRAM and CPU resources?
SAP HANA minimizes DRAM use by maintaining a single data copy for transactional and analytical workloads and by using advanced compression to store data in less space than its raw size. As far as CPU is concerned, SAP HANA often operates directly on compressed data, avoiding unnecessary compression and decompression operations. Additionally, it does not spend CPU cycles synchronizing multiple data copies, converting between row and columnar formats, or redirecting workloads between disk and in-memory stores.
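To illustrate what “operating on compressed data” can mean, here is a minimal sketch of dictionary encoding, a common columnar compression technique (this is a generic illustration, not SAP HANA’s actual implementation). Each distinct value is replaced by a small integer code, and a filter predicate is evaluated by comparing codes, with no decompression of the scanned values:

```python
# Conceptual sketch of dictionary encoding for a string column.

values = ["EMEA", "APJ", "EMEA", "AMER", "EMEA", "APJ"]

# Dictionary: each distinct value maps to a small integer code.
dictionary = {v: i for i, v in enumerate(sorted(set(values)))}
encoded = [dictionary[v] for v in values]  # the column now stores ints

# Predicate region = 'EMEA': translate the literal to its code once,
# then scan the compact integer array directly.
target = dictionary["EMEA"]
matches = sum(1 for code in encoded if code == target)
print(matches)  # 3
```

The integer codes take far less space than the repeated strings, and the scan loop compares integers instead of strings, which is why this style of compression can save both DRAM and CPU cycles at the same time.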
10. Who has the leading true in-memory database?
SAP HANA is a true in-memory database with a proven track record at thousands of customers. Several customers are live with SAP Business Suite, one of the most demanding and mission-critical enterprise transactional applications, on SAP HANA. Additionally, many ISVs and hundreds of start-ups are developing applications on SAP HANA.
These questions complete my list of top 10 questions and answers about SAP HANA.
What questions do you have? Let me know, and I will try to answer them in future blogs.
Learn more about SAP HANA.
How to compare modern and traditional approaches to In-memory Data Management (Part two of two)