Sweet, Simple . . . and Explosive?

Posted by Swen Conrad on August 13, 2014

Suite on HANA, Simple Finance, and a really cool explosion video as a bonus

Ever since returning from SAP SAPPHIRE in June, I have wanted to write this blog. The reason: I was thrilled to see the official announcements of both SAP Simple Finance and the HP ConvergedSystem 900 for SAP HANA in pretty much the same keynote. And why the excitement? Because it brings together

  • a tremendous additional business value proposition for Suite on HANA and
  • the largest scale-up HANA appliance with up to 12TB of RAM for truly mission-critical deployments like ERP and CRM.

And this, in my humble opinion, changes the cost-benefit ratio for migrating a customer’s SAP ERP system to HANA so drastically that we will see increasing HANA adoption in this key HANA segment. But what makes me so certain about this? Read on to find out!

No more aggravating aggregates

While SAP HANA has improved the traditional SAP Business Suite applications in many areas, such as Materials Requirements Planning (MRP) and operational reporting, those improvements have so far been based on targeted optimizations of existing SAP programs for SAP HANA. With Simple Finance, SAP has for the first time rewritten a major Suite component in its entirety and has been able to take full advantage of the power of SAP HANA.

The central trick behind Simple Finance was to remove aggregates and the materialized tables they were stored in. Pre-HANA, these aggregates were critical for keeping response times acceptable, but they had the side effect of increasing system management overhead while decreasing business agility. With SAP HANA, such tricks are no longer needed: reporting directly against the line-item data is extremely responsive, and the additional load is highly affordable.
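To make this concrete, here is a toy Python sketch (purely illustrative, not SAP's data model or code) contrasting the two patterns: a materialized total that has to be maintained with every posting versus summing the line items on demand, which is what an in-memory column store makes fast enough for interactive reporting.

```python
from collections import defaultdict

# A handful of hypothetical financial line items (document-level postings).
line_items = [
    {"cost_center": "CC-100", "account": "travel", "amount": 1200.00},
    {"cost_center": "CC-100", "account": "salary", "amount": 9500.00},
    {"cost_center": "CC-200", "account": "travel", "amount":  300.00},
    {"cost_center": "CC-200", "account": "salary", "amount": 8700.00},
]

# Pre-HANA pattern: keep a materialized aggregate alongside the line items
# and update it on every posting. Fast to read, but every write now touches
# two structures that must be kept consistent.
materialized_totals = defaultdict(float)
for item in line_items:
    materialized_totals[item["cost_center"]] += item["amount"]

def post(item):
    line_items.append(item)
    materialized_totals[item["cost_center"]] += item["amount"]  # extra bookkeeping

# In-memory pattern: no stored aggregate at all; just sum the line items
# whenever a report asks for them.
def totals_by_cost_center(items):
    totals = defaultdict(float)
    for it in items:
        totals[it["cost_center"]] += it["amount"]
    return dict(totals)

print(totals_by_cost_center(line_items))  # {'CC-100': 10700.0, 'CC-200': 9000.0}
```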

Hasso Plattner explains this nicely in his recent blog, “The Impact of Aggregates,” which makes for some interesting reading. He also touches on a situation that I have seen in the past: those oh-so-rare changes in organizational structures, aka YAR (Yet Another Reorg). So what happens in a traditional SAP Finance system during a management-driven reorganization? Here’s a sampling:

Reorgs entail changes in reporting relationships as well as in product ownership. And while reporting relationships are simple lines in PowerPoint slides, they have very complex permutations once you look at compensation, bonus pools, and cost allocations for overhead. It gets even more complex on the product/services side, where cost of goods sold, sales, and revenue must be accounted for, all represented in cost centers, profit centers, or regular accounts as part of your company’s ledger and/or sub-ledger. For the finance department to manage and report on all these data points in a responsive manner, traditional finance applications before HANA stored aggregates, for example revenue per product line or compensation per department, in materialized tables.

Now, when a reorg happens, several or all of these assignments of people, material cost, and revenue to cost or profit centers change. To complicate matters, the effective date of the changes may lie in the past or the future, and/or the business would like to simulate the proposed changes before committing to them. And while SAP has built a process called Profit Center Reorganization to facilitate such reorgs, a quick look at the help page describing the necessary steps will give you an appreciation of the limitations and the prohibitive complexity of such a reorg.
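Sticking with the toy line-item model from the earlier sketch (again purely illustrative, not SAP's data model), this shows why holding only line items makes a reorg, or a simulation of one, so much cheaper: the new cost-center mapping is applied at query time, and no stored aggregates need to be rebuilt or corrected.

```python
from collections import defaultdict

# Minimal set of hypothetical line items (same shape as the earlier sketch).
line_items = [
    {"cost_center": "CC-100", "account": "salary", "amount": 9500.00},
    {"cost_center": "CC-200", "account": "travel", "amount":  300.00},
    {"cost_center": "CC-200", "account": "salary", "amount": 8700.00},
]

def totals_under_mapping(items, remap):
    """Report totals under a (proposed) cost-center mapping, applied purely
    at query time; no materialized aggregate is touched."""
    totals = defaultdict(float)
    for it in items:
        totals[remap(it)] += it["amount"]
    return dict(totals)

# Proposed reorg: the travel spend of CC-200 moves to a new cost center CC-300.
def proposed(it):
    if it["cost_center"] == "CC-200" and it["account"] == "travel":
        return "CC-300"
    return it["cost_center"]

print(totals_under_mapping(line_items, lambda it: it["cost_center"]))
# as-is view:      {'CC-100': 9500.0, 'CC-200': 9000.0}
print(totals_under_mapping(line_items, proposed))
# simulated reorg: {'CC-100': 9500.0, 'CC-300': 300.0, 'CC-200': 8700.0}
```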

In Hasso Plattner’s words, “the simplification is huge and makes the whole accounting system so much more flexible.” Having been part of reorg projects in the past, I could not agree more.

In short, SAP Simple Finance is a true game-changer for SAP customers and will make the business benefits of HANA highly tangible for business stakeholders. But what about the other two points that come up in every Suite on HANA conversation:

  • the sheer size of the ERP database, and
  • the mission-critical nature of ERP systems, which requires 24×7 availability?

It’s true, bigger is better!

Most SAP customers started with moderate database sizes when first implementing SAP ERP. Over the years, with a growing worldwide and functional footprint, acquisitions, plus years of historic data, ERP database sizes have grown into the double-digit TB range. At Hewlett-Packard, for example, one of the core SAP ERP production systems has reached a massive 40TB.

While SAP HANA is able to scale out linearly for analytic use cases, and it is well understood how to build large landscapes from smaller interconnected nodes, SAP does not yet support this for Business Suite on HANA. That means the only supported way to implement Suite on HANA is a single scale-up system configuration. Let’s consider for a minute the HP example with 40TB of uncompressed ERP data. Obviously, this single system requires a massive amount of RAM, so let’s do some math:

  • HP’s ERP system size: 40TB
  • Typical HANA compression ratio for ERP: 5:1
  • Compressed HP ERP data volume: 8TB
  • HANA appliance RAM allocated for ERP data versus working RAM: 50-50
  • Theoretical size of HP’s ERP on HANA system: 16TB

As the calculation shows, HP IT would theoretically require a single HANA appliance with 16TB of RAM! Since such a large HANA appliance does not exist, the HP IT team is well underway with system archiving activities to tame the database volume and make the transition easier. So how much data will they have to shave off to fit into a single SAP HANA appliance?
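For anyone who wants to play with the numbers, here is the sizing arithmetic from the list above as a small Python helper. This is a back-of-the-envelope sketch: the 5:1 compression ratio and the 50-50 split between column-store data and working memory are rules of thumb, not guarantees for any particular system.

```python
def hana_ram_estimate_tb(uncompressed_tb, compression_ratio=5.0, data_ram_share=0.5):
    """Rough scale-up sizing: compress the source database, then assume the
    compressed data may only occupy data_ram_share of the appliance RAM
    (the remainder is working memory)."""
    compressed_tb = uncompressed_tb / compression_ratio
    return compressed_tb / data_ram_share

print(hana_ram_estimate_tb(40))  # 16.0 -> HP's 40TB ERP needs a 16TB appliance, in theory
print(hana_ram_estimate_tb(30))  # 12.0 -> archiving down to ~30TB would fit a 12TB appliance
```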

SAP and HP co-innovation: the outsized scale-up that businesses need

In early 2013, SAP challenged HP to build the largest available scale-up HANA appliance for mission-critical customer environments. HP’s mission-critical server team took up the challenge, and the two companies engaged in deep co-innovation. The result: in June 2014, after diligent joint engineering and testing, HP announced the HP ConvergedSystem 900 for SAP HANA (CS900) during Bernd Leukert’s SAPPHIRE keynote.

The breakthrough CS900 HANA appliance, based on the latest Intel architecture with 16 Xeon E7 (Ivy Bridge-EX) CPUs, is the only system on the market that provides 12TB of RAM; the next-largest competing system offers only 6TB. And best of all, the CS900 solves HP IT’s challenge of migrating their 40TB system to SAP HANA (after a little housekeeping, as mentioned earlier).

While I am discussing the 12TB scale-up solution in this blog, the picture below shows the other available configurations as well as the key common components. Note that this is only the beginning of the product roadmap; more versions of the CS900 are to come.

Picture: System family for HP CS900 for SAP HANA

For when failure is not an option

Compare a company’s ERP system to the cardiovascular and nervous systems of the human body. Failure of those systems quickly becomes life-threatening, and so can an extended ERP system outage for a corporation. Therefore, any company considering moving its ERP system to a new platform has to ensure that the new platform provides high availability and disaster recovery. In other words: business continuity! So how does the HP CS900 stack up against this requirement?

The 12TB scale-up solution is built on Superdome 2 and ProLiant technologies, which are based on the legendary Tandem NonStop fault-tolerant systems originally built for banks and stock exchanges. The high fault tolerance of the Tandem systems then, and of the HP CS900 for SAP HANA now, rests on high levels of RAS (Reliability, Availability and Serviceability), protecting data integrity and allowing for long periods of uninterrupted uptime. These high RAS levels are achieved through built-in redundancies and sophisticated self-check and self-healing mechanisms, and they are key to minimizing the risk of failure for a mission-critical SAP ERP system. Unfortunately, even the best systems can fail, whether due to individual components or outside forces. What then?

But what happens when your datacenter explodes?

SAP HANA has come a long way: since SPS6 it has provided all required high availability and disaster recovery (HA/DR) capabilities for failing over single nodes or entire systems, either locally or to a secondary data center. However, these failover scenarios require a certain level of human intervention when relying on native SAP HANA capabilities alone, which I think is why expert opinions are split on HANA’s data center readiness. Here is my take:

SAP offers mature and flexible data replication technologies; failure detection and automatic resolution, however, come from complementary third-party solutions that all major system vendors offer for their respective HANA systems. The capabilities and maturity of these offerings differ, and the broad majority of vendors provide only storage-based replication. The only two system-replication solutions that complement and work on top of the SAP replication technologies mentioned above, operating directly on the in-memory layer within SAP HANA, are HP ServiceGuard and SUSE Linux Cluster (in beta).
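To illustrate what that “automatic resolution” layer adds on top of SAP’s replication, here is a deliberately simplified watchdog sketch. The health probe and the takeover call are placeholders for what a real cluster manager such as HP ServiceGuard does with far more care (quorum, fencing, split-brain protection); none of the names below are real SAP or HP APIs.

```python
import subprocess
import time

CHECK_INTERVAL_S = 10          # how often to probe the primary site
FAILURES_BEFORE_TAKEOVER = 3   # avoid failing over on a single network blip

def primary_is_healthy() -> bool:
    """Placeholder health probe (a Unix-style ping of a hypothetical host).
    A real cluster manager checks the HANA services, the replication state,
    and cluster quorum, not just network reachability."""
    return subprocess.call(["ping", "-c", "1", "primary-hana-host"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

def trigger_takeover() -> None:
    """Placeholder for promoting the secondary site. With HANA system
    replication this is where the takeover on the secondary would be initiated."""
    print("Promoting secondary site to primary ...")

def watchdog():
    misses = 0
    while True:
        misses = 0 if primary_is_healthy() else misses + 1
        if misses >= FAILURES_BEFORE_TAKEOVER:
            trigger_takeover()
            break
        time.sleep(CHECK_INTERVAL_S)

# watchdog()  # runs until a take-over is triggered
```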

A full comparison of the two approaches is beyond the scope of this blog, but I hope the point came across: there is a wide variety of HA/DR solutions available in the market, and a mission-critical Business Suite on HANA can be operated in a highly risk-mitigated fashion. The following overview gives a high-level comparison of the two major HA/DR approaches.

Storage replication

  • Vendors: HP, IBM, Hitachi, Cisco, Dell, Fujitsu, NEC, VCE, Huawei, Lenovo (China only)
  • Supported HANA use cases: scale-up and scale-out
  • Replication strategies: synchronous and asynchronous
  • Bandwidth requirements: higher – partial transactions are replicated, which results in costly roll-backs when a transaction is cancelled, e.g. due to a failure
  • Disaster recovery (*): performance optimized – slow; cost optimized – slow
  • Openness: hardware vendor dependent
  • Additional capabilities: n/a
  • Key roadmap capabilities: n/a

System replication

  • Vendors: HP ServiceGuard; SUSE Linux Cluster (in beta)
  • Supported HANA use cases: scale-up and scale-out (HP only)
  • Replication strategies: synchronous and asynchronous
  • Bandwidth requirements: lower – cancelled transactions are never transmitted, since replication happens only after a full commit; in sync mode only the log files are transferred continuously with the transaction commit, the rest asynchronously, driving lower bandwidth
  • Disaster recovery (*): performance optimized – fast; cost optimized – medium
  • Openness: infrastructure agnostic
  • Additional capabilities: zero downtime management (aka NetWeaver connectivity suspend); cascading multi-tier system replication (HP only)
  • Key roadmap capabilities: active/active operation (read-only reporting on the secondary failover site)

(*) Performance optimized: the secondary system is used entirely to prepare for a possible take-over; its resources pre-load the data, so take-overs and the performance ramp are shortened as much as possible. Cost optimized: non-production systems operate on the secondary; resources are freed (no data pre-load) and offered to one or more non-production installations; during a take-over the non-production operation has to be ended, and take-over performance is similar to a cold start-up.
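As a conceptual footnote to the “sync mode” point above, here is a toy sketch of the difference between synchronous and asynchronous log shipping (a model of the general idea, not of HANA’s actual implementation): it comes down to whether a commit waits for the secondary’s acknowledgement.

```python
import queue
import threading

log_channel = queue.Queue()   # stands in for the network link to the secondary site

def secondary_site():
    """Consumes shipped log entries and acknowledges the synchronous ones."""
    while True:
        entry, ack = log_channel.get()
        # ... apply the log entry to the secondary's copy of the data ...
        if ack is not None:
            ack.set()   # tell the primary the entry has arrived safely

threading.Thread(target=secondary_site, daemon=True).start()

def commit_sync(log_entry):
    """Synchronous replication: the commit returns only once the secondary has
    acknowledged the log entry; no data loss, but higher commit latency."""
    ack = threading.Event()
    log_channel.put((log_entry, ack))
    ack.wait()

def commit_async(log_entry):
    """Asynchronous replication: ship the entry and return immediately; lower
    latency, but entries still in flight are lost if the primary fails."""
    log_channel.put((log_entry, None))

commit_sync({"txn": 1, "posting": "document A"})
commit_async({"txn": 2, "posting": "document B"})
```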

A picture (or video) is worth a thousand words

The question remains: how do these large-scale, mission-critical systems behave in a live disaster recovery situation? Have a look at this video showing exactly such a failover from a primary to a secondary data center. Admittedly, the circumstances are man-made, but close enough to a real disaster in my opinion. See for yourself.

Video: HP Disaster Proof Solutions

Closing thoughts

Cool stuff, no?

I hope you can see the business potential of SAP Business Suite powered by SAP HANA with solutions like SAP Simple Finance, as well as a very feasible, low-risk path toward implementing this solution within your company. I personally believe that this solution is game-changing. Not only does it greatly improve existing processes like finance and MRP, it also offers vast opportunities for moving your company to an entirely new level. Here is one final example:

Also at SAPPHIRE in June, I met a mid-size German producer of industrial equipment (from the so-called “Mittelstand”) which is migrating all of its business systems to SAP HANA. This is part of the company’s goal to transform itself from a product-centric company into a services-centric one, which leases its products to customers and complements them with rich services focused on predictive maintenance. Think “Internet of Things” (IoT).

With SAP HANA and Business Suite on HANA running on a platform built for mission-critical operations, every SAP customer has the opportunity to up their game, increase business and IT efficiency, and expand into entirely new business and solution areas. And this is great!

Thanks for reading, and I look forward to your comments. Please also have a look at this technical brief on the HP ConvergedSystem 900 for SAP HANA to find out more.

Swen

PS I thought this SPECjbb2013-MultiJVM benchmark may also be worth a look …

PPS: A word about myself: I have been part of the SAP HANA journey from the early days as an employee of SAP Labs in Palo Alto. I recently (re-)joined Hewlett-Packard as the SAP Chief Technologist within HP’s SAP Alliance organization and am glad to still be working on this exciting topic.
