While we learnt some things about Hana Enterprise Cloud (HEC) at its launch last week, and a little more this week at Sapphire, what dawned on me in the aftermath is that elasticity is a fundamental challenge for HEC and any in memory offering.
Side note: my 1st blog written in the air, 1st written on an iPad, 1st published in the air - thanks GoGo - and written about cloud - edited in the cloud (Google Drive and Blogger) - dreamt up above the clouds. Technology makes it happen.
Elasticity - revisited
Let's remind ourselves what elasticity in the cloud is all about. It's the dynamic ramp up (and ramp down) of computing resources for a cloud application. If we look at the market leader - Amazon's AWS - it's basically the process of getting another AWS instance up and running from an AMI file that is stored somewhere - on a Hard Disk Drive (HDD). As that AMI gets loaded from hard disk to memory, capacity of the cloud application increases. When less capacity is requested, AWS takes instances down, adjusting capacity to demand. That is elasticity: ramping computing resources up and down.

So that's elasticity of code - but what about elasticity for data? In contrast to code, data can never simply be taken down like code instances, as it may be needed again later. So ultimately all data has to become persistent, and that means it needs to be stored on HDD at some point. Of course, like many of the major Internet properties, you could use faster and more expensive flash storage instead of HDD. But it equally makes the data persistent.
Obviously elasticity is the key influencer of TCO for any cloud, as it lets you match computing resources to the actual load.
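To make that ramp up / ramp down concrete, here is a minimal sketch of code-side elasticity against the AWS API. It assumes boto3 is configured with credentials; the AMI ID, instance type and load thresholds are made-up placeholders, not a recommendation:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

AMI_ID = "ami-0123456789abcdef0"   # placeholder AMI, stored on disk by AWS
SCALE_UP_LOAD = 0.8                # illustrative thresholds
SCALE_DOWN_LOAD = 0.2

def scale(current_load, running_instance_ids):
    """Ramp capacity up or down based on observed load (simplified)."""
    if current_load > SCALE_UP_LOAD:
        # Elasticity up: the AMI is loaded from disk into memory on a new instance
        ec2.run_instances(ImageId=AMI_ID, InstanceType="m3.medium",
                          MinCount=1, MaxCount=1)
    elif current_load < SCALE_DOWN_LOAD and len(running_instance_ids) > 1:
        # Elasticity down: a code instance can simply be taken away again
        ec2.terminate_instances(InstanceIds=[running_instance_ids[-1]])
```

The point of the sketch is the asymmetry the rest of this post is about: the instance (code) can be thrown away and recreated from the AMI at any time, while the data it worked on still has to live somewhere persistent.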
Hana's raison d'être is the core of the challenge
Hana was designed to be an in memory database - and that core DNA of being in memory also creates its key challenge, causing issues with any form of elasticity. With Hana's father, Hasso Plattner, adamant about not looking at other options for data storage, Hana can only keep data in memory - expensive RAM - and with that it offers no option to move data anywhere else.

Hana can't be elastic for data - only static
From the above it's clear - there is no place to move the data from Hana to. Used or not used, it stays in expensive RAM. Yes, RAM prices are dropping, but so have Flash and HDD costs. There is definitely more room for price decreases on the RAM side, but we will have to wait and see how fast they materialize.

Obviously SAP could create the mechanisms to make code elastic - meaning a way to take code out of Hana's RAM and put it on a cheaper, persistent medium when it is not needed. And SAP has a lot of experience in this area; just look at the mechanisms SAP put into the memory management of its ABAP code. Gigabytes of ABAP code get moved, cached and parsed very well in the much more resource-thrifty R/3 architecture. But the prize for that would be small, as memory used by code in the HEC should be dwarfed by memory used by data.
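To see why this matters for TCO, here is a back-of-the-envelope sketch. The per-GB prices are purely illustrative assumptions (they move constantly and vary by vendor); it is the ratio between RAM, flash and HDD that carries the argument:

```python
# Illustrative, assumed per-GB acquisition costs - not actual market prices
COST_PER_GB = {"ram": 10.0, "flash": 1.0, "hdd": 0.05}

def storage_cost(gb_by_tier):
    """Sum up the cost of a dataset spread across storage tiers."""
    return sum(COST_PER_GB[tier] * gb for tier, gb in gb_by_tier.items())

dataset_gb = 10_000  # a 10 TB warehouse, for the sake of the example

all_in_ram = storage_cost({"ram": dataset_gb})
tiered = storage_cost({"ram": dataset_gb * 0.1,     # hot data
                       "flash": dataset_gb * 0.3,   # warm data
                       "hdd": dataset_gb * 0.6})    # cold data

print(f"all in RAM: ${all_in_ram:,.0f}  vs  tiered: ${tiered:,.0f}")
```

Whatever exact prices you plug in, keeping everything - hot or cold - in RAM is the most expensive possible configuration, which is exactly the elasticity problem described above.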
Take a page from Teradata?
Maybe the Hana architects should look at the recent Teradata announcements of adding another category to their hot / cold storage algorithms for data, which put only the hottest data in memory and the coldest on HDD, with the remainder distributed across storage media of varying speed and cost. SAP could put a similar algorithm in place - are there actually faster and slower parts of RAM, like writing data on the outside vs. the inside of an HDD platter?
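Here is a hedged sketch of what such temperature-based placement could look like in principle. The tier names, thresholds and access-count heuristic are my assumptions, not Teradata's or SAP's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    accesses_last_week: int   # simplistic "temperature" signal

def choose_tier(block, hot_threshold=1000, cold_threshold=10):
    """Assumed thresholds: hottest blocks stay in RAM, coldest go to HDD."""
    if block.accesses_last_week >= hot_threshold:
        return "ram"     # hot: keep in memory
    if block.accesses_last_week <= cold_threshold:
        return "hdd"     # cold: park on cheap persistent storage
    return "flash"       # warm: middle tier

blocks = [Block("current_quarter_sales", 50_000),
          Block("last_year_sales", 200),
          Block("archive_2005", 2)]

for b in blocks:
    print(b.name, "->", choose_tier(b))
```

The point is not the specific heuristic but that a tier below RAM exists at all - which is precisely what Hana, as described above, does not offer today.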
Hana needs a bigger L1

Sometimes SAP also touts its collaboration with hardware partners around the improvements needed for in memory architectures. And you could make the case that the speed with which RAM can feed data to the caches closer to the processor - L1, L2 and so on - will make a difference. Or a much larger L1 cache. But that could slow the processor down, as you want to feed it the right bite-sized portions. So maybe a bigger L3? We will see what SAP and its partners come up with - but it will certainly not be too much low hanging fruit.
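Some rough arithmetic shows why cache improvements alone cannot carry an in memory database. The cache sizes below are typical ballpark figures for server CPUs, assumed purely for illustration:

```python
# Typical ballpark cache sizes, assumed for illustration
L1_BYTES = 32 * 1024          # ~32 KB per core
L2_BYTES = 256 * 1024         # ~256 KB per core
L3_BYTES = 20 * 1024 * 1024   # ~20 MB shared per socket

dataset_bytes = 2 * 1024**4   # a modest 2 TB in memory dataset

for name, size in [("L1", L1_BYTES), ("L2", L2_BYTES), ("L3", L3_BYTES)]:
    share = size / dataset_bytes
    print(f"{name} holds {share:.8%} of the dataset at any moment")
```

Even a several-times-larger L3 is a rounding error against terabytes of data, so the cache hierarchy cannot substitute for making the data tier itself cheaper or more elastic.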
MyPOV

While Hana can certainly make the claim for speed, it also requires storing data in the most expensive and most limited storage medium we have today: RAM. Since SAP cannot easily make the data storage side elastic - and some of the motivation for not doing so seems arbitrary - Hana solutions will be pricey. If SAP were to consider strategies to make data storage more elastic, it should help Hana, as it would make more cost-benefit assessments of in memory applications come out in Hana's favor.