“Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.” – Steve Jobs
Computing evolution moves in tides of complexity and simplification. In the 1980s, it was the CISC era, and the grand simplification then was RISC – Reduced Instruction Set Computing. Last week UC Berkeley received an award along with Sun for initiating that wave of simplification. The simplification phase usually starts with a change in programming models; that one began the commoditization of the shared memory programming model. Complexity followed the wave of simplification, and in that wave we saw the emergence of big SMP-class systems (the Sun E10K is a proxy). That complexity peak led to a tipping point, forcing programming models to change, or simplify. Big SMP “scale-up” gave way to “scale-out”. The shared memory paradigm gave way to message passing. “Software will change when it has to, not when it needs to.” The direct beneficiary of a simpler programming model is simpler hardware, as history has shown. Fifteen years later, we are at another cusp of complexity and the next wave of simplification.
At SAPPHIRE NOW 2014, we showcased what we call a HANA Cell, the basic building block for HANA Enterprise Cloud (HEC) – HANA running on SAP’s cloud infrastructure. It was a showcase of our 2nd-generation cell design, done by the HANA Cloud Computing group located in Palo Alto and Belfast, UK. In some sense, it was a showcase of simplification.
What was unique about this demonstration is that it was the first time one could put together a rack-sized computer, from mostly commodity components, that is simpler, faster and cheaper than any equivalent platform for running Big Enterprise applications. The technical details of this building block were presented at the Flash Memory Summit 2014. A brief outline of the key specifications follows.
In very simple terms, one is now able to build a rack with 24TB of DRAM and 32 Intel CPUs with 576 cores (Haswell), out of commodity off-the-shelf boxes available from HP, IBM (Lenovo), Dell, Supermicro, Quanta, etc.
By way of comparison: 15 years back, large-memory systems were designed from the ground up using proprietary components, and the best-known system then was the Sun E10K. At that time, for $5M you could get 64 processor cores with 64GB of DRAM. This was the best ERP machine of that era.
What can you build today? On the left is a rack of eight 4-socket servers, available from any number of vendors, all interconnected via a high-performance fabric (PCIe in this case) to a shared flash array with a capacity of 200TB (and up to 500TB), delivering an all-solid-state rack (no rotating media).
In 15 years, CPU performance has gone up by 135x, aggregate memory bandwidth in the system has gone up 125x, and memory capacity has gone up by 375x, while the cost is 1/5th of what you paid then.
What perhaps took $200M+ to develop as the world’s biggest ERP machine can now be put together from 4-socket Intel servers for far less than $1M, with little or no R&D. What used to fill an entire rack with an E10K, you can now get, and more, in a single 4U 4-socket system (72 CPU cores vs 64 cores, 3TB of DRAM vs 64GB of DRAM, $50K vs $5M).
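As a sanity check, the E10K-vs-4U figures above can be tallied with a few lines of arithmetic. This is just a back-of-the-envelope sketch using only the numbers quoted in this post:

```python
# Back-of-the-envelope tally of the figures quoted above:
# a Sun E10K (~$5M, 64 cores, 64GB DRAM) vs a single 2014-era
# 4U 4-socket Haswell server (~$50K, 72 cores, 3TB DRAM).
e10k = {"cores": 64, "dram_gb": 64, "price_usd": 5_000_000}
modern_4u = {"cores": 72, "dram_gb": 3 * 1024, "price_usd": 50_000}

core_ratio = modern_4u["cores"] / e10k["cores"]           # 1.125x the cores
dram_ratio = modern_4u["dram_gb"] / e10k["dram_gb"]       # 48x the DRAM
price_ratio = e10k["price_usd"] / modern_4u["price_usd"]  # 100x cheaper

# Dollars per GB of system DRAM capacity, then vs now
dollars_per_gb_then = e10k["price_usd"] / e10k["dram_gb"]          # $78,125/GB
dollars_per_gb_now = modern_4u["price_usd"] / modern_4u["dram_gb"]

print(f"{core_ratio}x cores, {dram_ratio:.0f}x DRAM, {price_ratio:.0f}x cheaper")
```

The striking number is not the core count (roughly parity) but the cost of memory capacity: dollars per GB of DRAM in the system improved by several thousand times.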
OK, that was 15 years back. What about the latest Big Enterprise Platforms out there? Here’s a simple comparison between Exadata and our HANA Cell 2.0.
In a nutshell, for the same rack with similar power limits, you get a simpler, faster, cheaper and more capable platform. Simpler, because it’s composed of one type of building block that can be re-purposed for different uses (database, storage, application). Faster, because it has 2x the bandwidth for moving data. It has more capacity (6x in DRAM and 2x in flash storage), and it’s cheaper (under $1M vs $1.4M). In addition, the Cell approach is more capable because you have a flexible way to use the entire rack for different processing needs. No special database server or storage server. Simple software likes simple hardware. The core architectural thesis is a simple model: stateless compute nodes interconnected via a high-performance fabric to shared persistent storage (flash), backed by extremely scalable, low-cost HA/backup/DR-class storage (HDD).
This is the new Enterprise platform, engineered from simple, high-performance components: 4-socket Intel servers, interconnects that are generally available, and all-flash storage arrays that are becoming generally available from a number of companies.
The simplification wave changes the value chain as well. It did at the peak of the dot-com era, when Sun Microsystems became the first casualty of the shift. The cloud (web) transition enabled the OEM-to-ODM transition. Now we see IBM (Lenovo’s acquisition of IBM’s server business), and others. Where is the end? Oracle continues to push ‘engineered systems’ – but one has to step back and ask: what is the value of engineered systems, especially with the transition to the cloud? Software will change when it has to, and it is changing to accelerate this transition.
The first wave of simplification gave us RISC computing. The second wave of simplification gave us threading/multi-core and scale-out. We have entered the third wave of simplification and the unit of optimization is memory. That is what S/4HANA is all about.
Imagine what the next 15 years are going to look like. Certainly we are going to see gains of 10x or more in cost, 10x in performance and 10x in new capabilities, i.e. another 1000x. Strap yourself in for the ride.
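The “another 1000x” comes from compounding the three projected gains, a trivial sketch that takes the three 10x figures as given:

```python
# Compounding three independent projected 10x gains over 15 years:
# 10x cheaper, 10x faster, 10x more capable -> 1000x combined.
gain_cost, gain_perf, gain_capability = 10, 10, 10
combined = gain_cost * gain_perf * gain_capability
print(combined)  # 1000
```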
Infrastructure for S/4HANA – Simpler is faster, cheaper and more capable