- There is always a trade-off between paying for social media attention and not over-commercializing. The charity aspect takes away that conflict.
- I am not a social media marketing expert - but I am guessing the return on marketing dollars spent is pretty good on these US$10k. Advertising the hashtag and promoting Kronos on Twitter may work, too - but my gut feel is that it would yield a worse return per marketing dollar spent.
- Kronos could (or maybe should) increase the attention on this by updating regularly on how much has already been donated at point x of the conference. And then - assuming success - increase the total donation limit.
- Lastly, a vendor could get the Twittersphere even more involved by asking it to help decide which of e.g. three charities should receive the donation. So move from tweet-to-donate to tweet-for-where-to-donate - more involvement, more attention, more social media buzz - that's what you want for a conference.
- 11/13/13--00:24: Kronos executes – 2014 will be key
- Kronos cloud efforts are running at full steam, with now 9,000 organizations using the Kronos Cloud. It's not clear what percentage of the Kronos customer base that is - but certainly there is interest in the customer base to move to the cloud. The most mentioned drivers by both customers and prospects were the lower deployment costs and the ability to move update and upgrade burdens to Kronos. Unfortunately Constellation Research did not have the time to dig in more detail into the Kronos cloud infrastructure - so we can't cast a vote at this time on how much this may be a hosted vs. a true cloud offering. Regardless, customers seem to be content when using the cloud products and interested, even eager, to move to them.
- Kronos keeps investing into key mobile functionality - as more and more workforce management activity is recorded not on a desktop but on a mobile device. The added offline capability is a welcome feature, as wireless network coverage is still not ubiquitous, and we wish more HCM vendors would take notice and provide similar offline capabilities for their mobile offerings. Later this year geo-fencing capability is coming - a key convenience and automation feature for workforce management software. And finally the demos all showed neat annotation capabilities where a user can mark critical data and then share the annotations and / or screenshots with other users. With availability for iOS, Android and Blackberry, Kronos covers the key platforms.
Social is a trend not excluding workforce management, so Kronos (wisely, we think) decided not to build but to partner - in this case with Tibco's tibbr product - to bring better social practices and collaboration to its user base. The company even dabbles in gamification - though only by allowing managers to post statistics into an activity stream - a good starting point.
In our view this is overall a smart move, as any shift planner knows that everyday work is a communication and social challenge. The product will ship later in the year and we look forward to seeing its first incarnation.
- Much like social, there is no enterprise software user conference in 2013 that does not touch analytics. Kronos is partnering (wisely again, we think) with Microstrategy and delivers a number of powerful data visualization and exploration capabilities around workforce management data. Unfortunately - like many other vendors - Kronos is using the faux analytics term - describing a nicer version of dashboarding and not enabling true analytics that take action or at least suggest an action. But what isn't can still come true later.
- And let's not forget that Kronos is also a hardware company, building and selling the InTouch devices - which keep getting upgraded regularly. The ability to clock in without a badge, thanks to a high quality optical sensor, will certainly be an efficiency booster for this scenario. And of course it supports touch and plugs in well across connectivity options and firewalls.
- 11/13/13--12:07: Informatica pushes the cloud integration stakes
- Unifying Cloud data, service, and process integration – New service and process integration functionality enables users to build standards-based, advanced real-time integrations in the cloud across application services, processes and systems. The model-driven design approach enables LOB process analysts and integration developers to be highly productive developing service orchestration and process automation applications with improved collaboration and without requiring specialized development skills. Integrations developed on Informatica Cloud can be exposed as REST APIs to enable seamless access to processes and data from any consumer application, including mobile applications, web mashups and business partners.
- Crowdsourcing integration processes – New Global Repository expedites sharing of reusable cloud integration processes within the enterprise or across enterprises to increase development productivity, foster collaboration on complex business processes and accelerate time-to-develop and deploy integration projects.
- Seamlessly integrating interdepartmental processes – New Mid-Stream Writer empowers users to create advanced data flows enabling business processes that span departments and their individual cloud applications (e.g., Salesforce CRM, Marketo, Workday, etc.) for increased business productivity.
- Extending Salesforce Outbound Message Support with Custom Integration Templates to easily implement re-usable integrations triggered by events in Salesforce.
- Increased mobility – The Cloud Extend iPhone mobile app, previously available as a preview release, is now generally available. A new Cloud Extend Android mobile app is now available as a preview version.
- Simplified guide creation – Users can now easily create embedded sub-guides from within a parent guide that will use them, as well as quickly view and navigate to all guides that embed a particular sub-guide.
- Skip-able Automated Steps – Users are now able to use a “skip button” to sidestep any automated process steps (e.g., “Do you want to create a contact?”) not specifically required for the task at hand.
- Enabling developers to quickly select and mask related Salesforce objects and move them into the Salesforce sandbox for testing.
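As a hedged sketch of what such masking can look like in principle (the field names, record shape and hashing rule here are assumptions for illustration, not Informatica's actual service):

```python
import hashlib

# Fields treated as sensitive - an assumption for this example.
SENSITIVE_FIELDS = {"email", "phone", "ssn"}

def mask_value(value: str) -> str:
    """Replace a sensitive value with a deterministic, irreversible token."""
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:10]
    return f"MASKED-{digest}"

def mask_record(record: dict) -> dict:
    """Mask sensitive fields before copying a record into a test sandbox."""
    return {
        field: mask_value(value) if field in SENSITIVE_FIELDS else value
        for field, value in record.items()
    }

contact = {"name": "Ada Lovelace", "email": "ada@example.com", "phone": "555-0100"}
sandbox_copy = mask_record(contact)
```

Deterministic masking keeps referential integrity across records (the same email always maps to the same token), which matters when related Salesforce objects are moved into the sandbox together.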
- 11/14/13--07:22: AWS powers on, into new markets - Day 1 reinvent takeaways
- Use the cloud for development and test environments - examples were Lionsgate (SAP), Tokyo (Oracle) and Galata Chemicals (SAP again)
- Use the cloud to build brand new Apps
- Use the cloud to make existing on premise apps better (e.g. use Redshift for analysis)
- Use the cloud to tie new cloud Apps back to existing on premise Apps
- Use the cloud as a destination to move your applications to opportunistically - examples were Unilever (web sites) and Bristol-Myers Squibb (simulations)
- Use the cloud to move all on premise Apps to it, to go all in as AWS calls it (examples were Netflix and DowJones)
- 11/15/13--12:24: AWS moves the yardstick - Day 2 reinvent takeaways
- Security (interesting that this was #1 for Jassy yesterday)
- Cost (no one has ever said "I wish AWS would be a little more expensive")
- Force.com is a vehicle to be acquired by larger vendors or Salesforce itself. - This would imply no lasting stay on the platform.
- Or Force.com is a vehicle to get easier funding from the VC community. - This may be risky, as funding may not always be abundant or available.
- Or Force.com is a great platform to go public - see Veeva. - This may be risky, as one cannot predict the IPO market.
- Salesforce remains a key application vendor by itself, who also has an investment in many players like in this case interestingly in FinancialForce. So Salesforce can play many roles in the ecosystem - but needs to remain above the distrust line.
- 11/21/13--00:21: First takes from IBM's Software Group's Analyst Insights
- 12/02/13--21:54: A tale of two cloud GAs - Google & HP
- Prices keep falling - Not surprisingly, both vendors addressed reductions in pricing - following Amazon's AWS lead. It's almost becoming a tradition that cloud announcements are coupled with price reductions, reflecting the lower cost of running hardware environments, progress in automation and economies of scale. Notable on the Google side is the change in pricing model, too - which takes away the cost for IOPS - and makes the cost of persistent disks more calculable - a move many users had been asking for.
- Memory matters - It's yet another indication that it's memory hungry applications that are moving to the public cloud - see BigData - but stacks overall get more RAM intensive. So vendors react with larger memory instances - HP puts in larger standards (3XL - 5XL) and more memory (L - 2XL - 16 cores and up to 120 GB of RAM) and Google launches three new instance types with up to 16 cores and 104 GB of RAM (limited preview only though).
- Networks matter - HP makes the case pretty clearly - claiming faster and larger uploads - you sure want to get the data into the cloud. Google touches networks implicitly by taking IOPS out of the pricing scheme - see above. Both moves need investments in network infrastructure that both companies are obviously undertaking.
- Disk becomes the migration tool - And despite all the push for in memory - it's the good old spinning rust that makes the cloud go round. You can now keep Google Persistent Disk volumes around even through planned maintenance, and volumes can be mounted and unmounted from one VM to another. On the HP side, Block Storage now gets SLA service coverage with the move to GA, and storage containers can be synched across regions.
- 12/11/13--01:31: SAP gets serious on open source and courts developers - about time
- 01/17/14--08:37: IBM kicks off the cloud data center race for 2014
- Data privacy compliance - Many countries have strict requirements in regards to how and where local data can be stored. The industry and customers have largely been ignoring this and looking the other way. The conversation with prospects and customers to win cloud loads gets significantly easier when you have an in-country data center.
- NSA / PRISM fall out counter - The recent NSA / PRISM scandal gives a number of new and extra concerns to enterprises about naively using the cloud. Local and national data centers, under local jurisdiction, help to address this concern.
- Performance gains - Quite rightfully many users in the world complain that a number of cloud apps work best in Silicon Valley, work ok in the US - but try them in Cape Town or Melbourne. As much as vendors want it - the internet is not a consistently performing system and placing data centers closer to users benefits their cloud application user experience.
- Reliability gains - Ever since the widely publicized lessons learnt from AWS clients not using the AWS availability zones correctly - the smarter enterprises have realized that their cloud processes and data need to run in more than one location. Additional data center locations help enterprises to design an optimized reliability strategy.
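The arithmetic behind that reliability argument is simple - a back-of-the-envelope sketch, assuming (optimistically) that failures across locations are independent:

```python
# If one location is up 99% of the time, two independent locations give
# 1 - 0.01^2 = 99.99%. Real-world outages can be correlated, so treat
# these numbers as an upper bound, not a promise.
def combined_availability(single: float, locations: int) -> float:
    """Probability that at least one of n independent locations is up."""
    return 1 - (1 - single) ** locations

two_locations = combined_availability(0.99, 2)
three_locations = combined_availability(0.99, 3)
```

Each additional independent location multiplies the residual downtime by the single-location failure probability - which is why a second data center buys far more reliability than a faster server in the first one.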
- 01/23/14--20:15: Workday Update 21 - All about the User Experience ... and some more
- Talent Acquisition - which brings together recruiting, assessment and onboarding.
- Talent Optimization - which brings together performance management, succession and compensation planning
- Social Networking - basically the capabilities of the (newly branded) Connections products
- Suite building - Not surprisingly IBM is bringing together the Kenexa products in a suite. Good for customers - but we will have to see how well integrated this new suite will be.
- New functionality - IBM announced the new Social Learning product - which basically brings social capabilities to the learning process - both on discovery, sharing, usage and creation of learning content. A proof point of the strategy to enrich Kenexa capabilities with Connections capabilities.
- Complement with Connections - As HCM systems are all about people and people need to cooperate - this can be a leg up for Kenexa - tight integration to a collaboration suite. The area to watch is, what IBM will do with customers not using IBM collaboration products.
- Complement with Watson - Watson's analytical capabilities are interesting for many of the decisions that can be automated in a modern HCM system.
- Move to SoftLayer - This is IBM's cloud strategy and it's being executed for parts of the Kenexa products in 2014.
- 02/10/14--16:50: Oracle pushes modern HR - there is more than technology
- Workforce Optimization - Oracle plans to get into the workforce optimization arena, starting with support for mini and macro re-organizations. Oracle HCM Cloud will allow users to analyze and predict worker and / or organizational challenges and opportunities, model and plan actions derived from these findings, and then execute on them at a certain point in time.
- New and expanded Mobile HCM - Oracle plans to deliver a new mobile experience both for tablets and smart phones. Interestingly Oracle also offers some offline capabilities - something of immense value for real mobile work warriors.
- BI Benchmarks - Oracle plans to provide more insight around the recruiting process - leveraging its strong market position with Taleo, which enables 11% of all US hires, adds 1.2 million job openings per quarter and features profiles of 417 million job candidates.
- Work Life Apps - Oracle gave a Fitbit to HCM World attendees and organized a step competition amongst them - a great showcase of the soon-to-be-released Work Life Apps the company is planning.
- Collaboration - Enable users to collaborate and find expertise in the context of their business process.
- Discovery - Enrich employee profiles and job descriptions with reputation and influence in social media communities.
- Referrals - Syndicate and post open job requisitions and source candidates through social networks.
- Consolidate - Bring together all social media activities for HCM.
- Innovative Recruiting - Workday will be able to serve the best external candidates into a recruiting scenario. Ideally not even to the recruiter, but to the manager opening the requisition.
- Real Analytics at work - The condensation and serving of the best candidate requires some real analytics (the ones that take an action or suggest something). Identified can provide them - if they can find the way to serve the best candidates. (More on real analytics here).
- Workday BigData Analytics at work - Many observers, myself included, have given Workday a hard time over the current BigData Analytics offering being neither of those. If Workday does what Identified enables it to do - this would really be BigData (structured and unstructured data with the 3Vs - Identified demos claimed to run on 1B+ candidates) and Analytics (condensation, scoring).
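To make "real analytics" - condensation and scoring that suggest an action - concrete, here is a toy sketch of candidate ranking. The features, weights and data are invented for illustration; this is not Identified's or Workday's actual model:

```python
# Score a candidate against a requisition - a deliberately simple model.
def score_candidate(candidate: dict, required_skills: set) -> float:
    skill_overlap = len(required_skills & set(candidate["skills"])) / len(required_skills)
    experience = min(candidate["years_experience"] / 10, 1.0)  # cap at 10 years
    return 0.7 * skill_overlap + 0.3 * experience

def best_candidates(candidates: list, required_skills: set, top_n: int = 3) -> list:
    """Condense a large pool into a short, ranked suggestion for the manager."""
    return sorted(candidates,
                  key=lambda c: score_candidate(c, required_skills),
                  reverse=True)[:top_n]

pool = [
    {"name": "A", "skills": ["java", "sql"], "years_experience": 8},
    {"name": "B", "skills": ["java"], "years_experience": 2},
    {"name": "C", "skills": ["java", "sql", "hcm"], "years_experience": 5},
]
shortlist = best_candidates(pool, {"java", "sql", "hcm"}, top_n=2)
```

At the claimed scale of 1B+ candidate profiles the scoring would of course run as a distributed job, but the shape of the analytics - condense, score, suggest an action - stays the same.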
- Attractive user interface - IBM has done a good job creating an attractive user interface to run and tie applications on BlueMix. Of course you can go back to command line, but interestingly IBM shared that developers want to be more productive, too - so we will see how much mileage the overlaying user interface will get.
- Patterns prominently loom - IBM's pattern technology, originally devised with WebSphere, is getting good usage as part of BlueMix and is probably key to making the magic in the background work. Environments get more complex by the day - so the automation tool and its capabilities are key for BlueMix's success.
- A rich platform - IBM keeps exposing services and tools to BlueMix - most prominent right now are the Xtify push services. MongoDB was there and of course Cloudant (freshly announced acquisition) now, too. The extensibility here is key for IBM to augment BlueMix in the coming quarters, Watson services will be the clearly differentiating crown jewels.
- Of course, SoftLayer - Almost needless to say IBM will run BlueMix on SoftLayer technology. Probably a good move to cater to a security sensitive audience that is re-assured by SoftLayer's bare metal capabilities, extended localization and transparency down to the machine level.
CloudFoundry - IBM used to build it all themselves - now it shares a foundation with co-opetitors like HP, SAP, VMware, EMC etc - an interesting aspect - that we e.g. also see in the open cloud arena with OpenStack.
Standards based - IBM has always been a promoter of standards, but was also large enough to create standards by its sheer size. Taking outside standards and building out on them is the strategy for sure now.
Developer focused - IBM was always good at building software with the user in mind. But the developer focus is certainly new and it was impressive to see to what length IBM has gone to understand the (shifting) needs of its developer base.
Openness - Again IBM has always been a supporter of open. But it was often more on paper and marketing than in every day IT life. It's good to see the openness moving way beyond the marketing messaging and into the product DNA.
- Cloud First - This is probably the first product that IBM provides for developers - on the cloud first and for now - only. Good proof for IBM acknowledging the shift in the approach to tackle future revenue streams.
IBM customers & prospects - You should actively look at BlueMix if you have an immediate application development need. If you are a Rational shop, JazzHub is a good way to get started, but you probably have many questions for IBM. Otherwise, wait to see how IBM builds out BlueMix over the next quarters. If you are a customer of IBM's SaaS offerings, pay attention to the roadmap and functional richness of future releases.
Prospects should compare BlueMix with the usual suspects out there - Amazon's AWS, Microsoft's Azure, Google GCE come to mind (watch for the end of March events).
Both customers and prospects should be careful not to overcommit until pricing and licensing are clear - which IBM has not published yet.
IBM partners - IBM has put its cards down for both its PaaS and its 21st century Apps strategy - if you want to be part of it, it's time to start evaluating BlueMix and chart a strategy of differentiation in the partner ecosystem. And look at the patterns - this could be a key to more efficient engagements and potentially even the birth of an IP strategy for you.
Prospective partners who want to get a slice of the IBM ecosystem now have a chance to jump into the mix - it's pretty much year 0 for the API economy. A good time to start.
IBM - IBM needs to keep adding and building more services and APIs into BlueMix. Early references and success stories will be key to show impact and get the large - and mostly conservative - IBM install base to move to BlueMix faster. Obviously, get the pricing right - easier said than done. Look into exposing more of the products that were featured at IBM Connect as a differentiator versus other cloud PaaS offerings out there. Publish a roadmap both for the SoftLayer expansion to Power and for the addition of APIs from the SaaS portfolio to BlueMix.
- IBM competitors - Take a look at the broader perspective of what IBM is trying to do. If you compete in PaaS with IBM - you need an API strategy. If you compete on IaaS - you already know what the bare metal threat is for your business, so position and strategize accordingly. If you compete on SaaS - decide what the future for your 21st century apps architecture is.
- Is it all coming together for Oracle in 2014? Read here.
- From the fences - Oracle AR Meeting takeaways - read here (this was the last analyst meeting in spring 2013)
- Takeaways from Oracle CloudWorld LA - read here (this was one of the first cloud world events overall, in January 2013)
- Oracle integrates products to create its Foundation for Cloud Applications - read here.
- Java grows up to the enterprise - read here.
- 1st take - Oracle in memory option for its database - very organic - read here.
- Oracle 12c makes the database elastic - read here.
- How the cloud can make the unlikeliest bedfellows - read here.
- Act I - Oracle and Microsoft partner for the cloud - read here.
- Act II - The cloud changes everything - Oracle and Salesforce.com - read here.
- Act III - The cloud changes everything - Oracle and Netsuite with a touch of Deloitte - read here.
- Is the existing SI business being travolged? Read here.
I will be attending Kronos' user conference next week in Orlando - and when I got the customary what-to-do email from the vendor, I stumbled over a section mentioning an interesting marketing approach combining Twitter buzz with charity.
I asked the Twittersphere this morning if anyone has seen something similar - and so far no one has come back - so kudos go to Kronos for championing an innovative way to get attention on Twitter while equally making people feel good about tweeting, as every tweet containing the hashtag will increase the amount of the donation.
Some more thoughts on the topic
It was interesting to attend the Kronos user conference, Kronos World, earlier this week in Orlando. Kronos is a key player in the HCM market, being the overall leader for workforce management. Quantitative specialist analyst firm IHS estimates that Kronos has a worldwide market share of 25% and a North American market share of 37%. And Kronos grows twice as fast as the market, with 10% growth in FY 2013.
Addressing Quality Issues
Cloud Integration Innovations: Driving Productivity in Real Time
Informatica Cloud Winter 2014 introduces new advances in real-time data and applications integration, while also placing more productivity-driving power in the hands of line-of-business users. These advances include:
MyPOV – With a lot of cloud purchasing power residing in the LOB it is a good move by Informatica to support a closer to the LOB enabled integration.
- MyPOV – There is a lot of dynamism and even more hype around crowdsourcing. But at some point this trend will take off and be relevant - most likely more with business users than with technical users - so it's a good step for Informatica to enable repository-based re-use. And it's easy - it's all in the cloud already... And it's a good monetization option for customers and partners alike. We are waiting to see LOB execs haggling and trading their Informatica integration assets at a user conference. Seems far out - but we think we may see this come sooner than most vendors think.
- MyPOV – The capability to influence and transform data in the middle of an integration flow is very powerful. We need to learn more about the specific capabilities, but it's certainly a powerful and elegant feature for the Informatica Cloud.
MyPOV – A key feature to understand better what is happening inside of Salesforce. We will have to see what Informatica will be able to offer beyond messaging in upcoming releases.
New Cloud Automation Enhancements: Increasing Mobility and User Productivity
Natively integrated with Salesforce, Informatica Cloud Extend enables business users to create and deploy custom guides that automate Salesforce CRM processes and ensure that best practices are followed. Informatica Cloud Winter 2014 introduces new productivity features to Cloud Extend, including:
MyPOV – The appetite to consume all things on mobile devices does not stop for integration data, process and status. A good move by Informatica.
MyPOV – Giving users more pre-configured options to control integration flows makes Informatica Cloud more attractive as the LOB tool of choice when it comes to integration.
Cloud TDM Enhancements: Safeguarding Sensitive Data in Cloud Application Sandboxes
Earlier this year, Informatica introduced a unique Informatica Cloud Data Masking service that reduces the risk of data breaches during application development and testing. Informatica Cloud Winter 2014 delivers enhancements in this increasingly important arena to help make development on the Force.com platform even more productive, including:
MyPOV – Kudos to Informatica for making a key cloud problem easier. Masking of data for test and feasibility reasons is a painful and burdensome process that will certainly be welcomed by organizations seeking cloud projects.
Cloud Data Quality Services: Control the Accuracy, Consistency and Completeness of Data
Informatica Cloud Data Quality Services enable organizations to take control of the accuracy, consistency and completeness of their data-driven operations with a low-cost, per-transaction subscription. These SOAP-based web services are easily consumed by any analytical or operational application, or used as discrete data quality rules within any business process. Also, pre-defined data quality rules standardize, cleanse, validate and enrich data for all industries. In addition, specific data quality rules for the healthcare and financial services verticals are also included in this release.
- MyPOV – For Informatica to expand on its existing MDM and data quality tools is a good move. The absence of 'user' in the above press release paragraph may be an omission - or may otherwise hint at a more complex skill set being required to leverage these services. More details will be needed.
AWS has come a long way
Jassy - in our view rightfully - can be proud about the breadth and depth of the offering.
And AWS always likes to talk about how much capacity it adds on a daily basis and what size of Amazon business that would support. And that KPI has moved from an Amazon 5.2B business to an Amazon 7B business in 2013, a more than 30% increase year over year.
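The quoted figures support the claim - a quick sanity check:

```python
# Capacity added daily: from supporting a $5.2B Amazon business to a
# $7B one, per the keynote figures.
old_business, new_business = 5.2, 7.0  # billions USD
growth = (new_business - old_business) / old_business
# growth comes out to about 0.346, i.e. "more than 30% year over year"
```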
Customer growth is equally impressive and Jassy not surprisingly pointed out the progress in the public sector space. AWS never gets tired to stress that it cares for security and given the enhanced security requirements of the public sector that AWS fulfills - all customers benefit from the progress made there. More on that below.
- Our largest concern at this early stage is that logs will only be available around 15 minutes after the event happens. That is certainly fine for most events - but for some critical ones a lot of damage and harm can be done during that time. We think AWS can and will address this - maybe sooner rather than later.
Philosophy Part 1 - The AWS pricing flywheel
Philosophy Part 2 - Cloud means everybody can innovate
- All that of course gets more relevant with more disruptions happening to traditional businesses from cloud businesses - the examples being (of course all AWS customers) Airbnb, Spotify, Dropbox, Instagram and Uber.
Next up was Jeff Smith, the CEO of Australian finance giant Suncorp, giving an example for such an approach.
- Philosophy Part 3 - Cloud Benefits
Not sure why - but Jassy felt he needed to remind the audience of the basic cloud benefits. And that these - not surprisingly - cannot be leveraged in a private cloud.
- But then Jassy acknowledged that enterprises will not move overnight from on premise to the cloud. The world isn't black and white - but grey - and AWS helps customers bridge the gap between on premise and the AWS public cloud.
- Where AWS differentiates itself from the competition (which AWS somewhat cutely calls the old guard) is that AWS sees the on premise part ultimately shrinking to close to 10% of overall load. The old guard, in contrast, sees the public cloud as only 25% of the future load in an enterprise.
- It's a nice way to differentiate - but we think ultimately customers will do what's best for them - so the old guard's pacman is probably a temporary state of the overall cloud vs. on premise load balancing.
- And now something completely different - VDI
Not a small surprise that AWS now wants to be in the virtual desktop business - a business that has never really taken off in our view, despite a lot of promise. But maybe fall 2013 is the moment for VDI after all, with VMware acquiring Desktone and now AWS launching its offering.
- Not surprisingly, Amazon right out of the gate says that it will take its stab from another direction - from the public cloud. So all the cloud benefits of being fully managed, being a pay-as-you-go service, and requiring no CAPEX are a given. And AWS can provide the license or it can be BYOL - which is a good start. Moreover, the integration with Microsoft Active Directory makes it easy to deploy, as users can keep their credentials. And AWS WorkSpaces supports PCs and Macs as well as iOS and Android tablets. Lastly, AWS claims that by avoiding usual on premise costs such as server hardware, storage and administration, it can provide VDI at half the cost of on premise VDI deployments.
- Finally Jassy also unveiled the pricing - depending on hardware chosen and need for software, a VDI user will cost between $35 and $75 a month. A good price point that certainly isn't - as we know AWS - its last one.
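One way to frame the cost claim - the $35 to $75 monthly range is from the keynote, while the on-premise figure below is a purely hypothetical assumption for illustration:

```python
# AWS WorkSpaces keynote pricing, per user per month:
aws_low, aws_high = 35, 75
aws_annual = (aws_low * 12, aws_high * 12)  # dollars per user per year

# Hypothetical fully loaded on-premise VDI cost per user per year
# (server hardware, storage, administration) - an invented number,
# not a real benchmark:
on_prem_annual = 1800

# Under this assumption AWS's "half the cost" claim holds even at the
# top of the WorkSpaces price range:
claim_holds = aws_annual[1] <= on_prem_annual / 2
```

The real comparison obviously depends on the on-premise cost base of each enterprise; the sketch just shows the shape of the arithmetic behind the claim.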
- We had the chance to speak with executives from VMware in the aftermath of the keynote, and they said that they can be equally - if not more - attractive on price - and with Desktone have a mature product and significantly more functionality than AWS WorkSpaces. Certainly a valid point. But never underestimate AWS as a competitor. We asked for some reaction from the other incumbent, Citrix - but haven't heard back yet.
- Cloud strategies as seen by AWS
Next Jassy went over six different cloud deployment scenarios - as AWS sees them. The six are as follows...
- More Mobile with Amazon AppStream
AWS has taken care of the mobile market for a while, offering out-of-the-box mobile capabilities for developers who build mobile applications. And now AWS wants to help tackle a key problem - the rendering of highly interactive and compute intensive applications across devices, as seen in gaming, engineering, events etc. - with a new service called Amazon AppStream.
- Amazon AppStream is a flexible, low-latency service that lets customers stream resource intensive applications and games from the cloud. It deploys and renders applications on AWS infrastructure and streams the output to mass-market devices, such as personal computers, tablets, and mobile phones. Because the application is running in the cloud, it can scale to handle vast computational and storage needs, regardless of the devices customers are using, and customers can choose to stream either all or parts of their application from the cloud.
The good news is, that AWS does have the computing resources in the cloud to make this happen - and then the connections to the endpoints where these apps will be consumed. So it was an almost natural step for AWS to provide the streaming support of resource intensive apps.
- Interestingly enough, AWS developed its own protocol for the streaming - the Amazon AppStream STX protocol. We're not sure this was necessary and look forward to learning more about it from AWS.
[Update from AWS on November 15th 2013 - more about the protocol inserted:]
- The Amazon AppStream STX Protocol is a proprietary protocol used to stream high quality application video over varying network conditions. It streams video and audio encoded using standard H.264 encoding over the UDP transport layer. The protocol also captures user input and sends it over TCP back to the application being streamed from the cloud. Network conditions are constantly measured during this process and information is sent back to the encoder on the server. The server dynamically responds by altering the video and audio encoding in real-time to produce a high quality stream for a wide variety of applications and network conditions.
Well, there we have it - we're still not sure why AWS had to build its own protocol - but at the end of the day it does not matter, as AWS is large enough to set its own standards.
Unlike other remote-access services which provide access to the operating system, an end user uses your client software to connect directly to the AppStream application. This cuts latency and improves performance.
[Back with original post.]
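The adaptive loop the STX protocol description above implies - measure network conditions, feed them back to the encoder, adjust the stream - can be sketched in a few lines. This is purely illustrative logic, not AWS code; the bitrate ladder and thresholds are invented for the example.

```python
# Illustrative sketch of an adaptive-encoding feedback loop like the one
# the STX protocol description implies: the server measures network
# conditions and picks a new video bitrate accordingly. All numbers
# here are invented for the example.

BITRATE_LADDER_KBPS = [1500, 3000, 6000, 10000]  # low -> high quality

def pick_bitrate(measured_bandwidth_kbps: float, packet_loss: float) -> int:
    """Return the highest bitrate that fits the measured conditions."""
    # Back off aggressively when packet loss is high (UDP has no retransmit).
    if packet_loss > 0.05:
        return BITRATE_LADDER_KBPS[0]
    # Otherwise use ~80% of measured bandwidth, leaving headroom.
    budget = measured_bandwidth_kbps * 0.8
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

print(pick_bitrate(8000, 0.01))  # plenty of bandwidth, low loss -> 6000
print(pick_bitrate(8000, 0.10))  # lossy network -> drop to 1500
```

The point of the sketch is only the control flow: the encoder never trusts a one-off measurement, it continuously re-picks quality as conditions change.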
And finally Kevin Baillie of Atomic Fiction showed how AWS can help create CGI animations - a perfect showcase for elasticity: huge compute demand during production and rendering, and then close to nothing once Atomic Fiction puts the whole project into... AWS Glacier. The content and demos shown satisfied the geeky part of the audience, and probably the developer side of the audience, too - which did not get too much from the Jassy keynote... which - no surprise - was all about the enterprise.
MyPOV
A good start of re:Invent for AWS. Three significant product announcements - two that add value in key areas for AWS, security and mobile, and a new market entry: for the first time AWS will reach end users directly, with Amazon WorkSpaces.
It will be interesting to see how all three new offerings will do - with WorkSpaces coming into the most entrenched market space. And AWS clearly has the enterprise in mind, as all three new offerings are key to making enterprises more comfortable and more productive with AWS.
What was interesting to note is how AWS uses its streaming dominance for further product innovation. This may be only high level - but if you stream video content (for Netflix) to the volume of one third of US internet traffic, to a number of different devices and form factors, then rendering game and app information for the same devices and streaming it (Amazon AppStream) is not a completely new challenge. When you then consider that running a user's desktop is a less challenging compute load than rendering high-resolution apps and games - and then streaming that - the entry into VDI with Amazon WorkSpaces is not such a big step either. And both server compute loads, for Amazon AppStream and WorkSpaces, are... highly elastic. And the delivery of streams is something AWS has already mastered.
Looking forward to day 2...
A collection of key tweets from the keynote can be found in Storify here.
As is becoming tradition, the day 2 keynote of the AWS re:Invent conference was given by the AWS CTO, Werner Vogels. Vogels - or, as the devoted developer community refers to him by his Twitter handle, @Werner - was once again at his best, leading through the 100-minute keynote in front of 9,000 clouderati.
With 243 functions and features delivered so far throughout 2013, AWS is at a record pace of innovation even by its own standards. And Vogels tied things back to Amazon WorkSpaces and Amazon AppStream from yesterday's keynote as key product deliverables of 2013. It was good to see Vogels acknowledge that the number of new features can also be overwhelming, but this will not stop AWS, as rapid delivery is in the DNA of the division.
Philosophy Part 4 - Retro-engineer starting with the customer
(Parts 1-3 are in yesterday's post here.) As we have heard from Jassy (and in every other session we attended at re:Invent), AWS is customer focused and customers drive the innovation agenda. Vogels went into how the development teams achieve this, which is almost like retro-engineering, starting with the customer requirements.
Vogels said that the AWS teams start with a pseudo press release that describes what the new product / feature is all about. From there they write the FAQ on the product / feature. Next come the use cases and then the user documentation. The goal of the process is that the desired features are delivered and not lost, as can happen in a traditional, e.g. waterfall, approach to product development. At Constellation we advise customers to start with the end in mind - this is pretty much the same philosophy applied to product development.
Philosophy Part 5 - Keep it small
The other key success factor of the AWS development philosophy is to keep the teams small - Vogels introduced the two-pizza rule, which postulates that a team should be so small that two pizzas will feed it for dinner. Additionally, the AWS teams work autonomously, own their product's roadmap and work decoupled from product launch schedules.
The other noticeable practice is that the product teams are in constant contact with customers, working with them on product direction and requirements. Speed is the most important factor for AWS, so the teams work autonomously next to each other. They release their products when they are ready - as fast as possible. The sooner customers have the product in their hands, the sooner AWS can start improving it - a key benefit AWS goes after.
In our view that is a laudable approach, but AWS needs to take account of the fact that its customers use multiple of its products, and often these need to work well together. With the AWS teams siloed and working on getting their products delivered as fast as possible, it is possible that the customer ends up becoming the AWS system architect - a scenario we think AWS management will want to avoid.
Vogels' showcase for this was the RDS team, which has continuously innovated based on customer requirements - and the key feature released at re:Invent for RDS has been - PostgreSQL. And with that we were at one of the two major product announcements of day 2.
The announcement drew spontaneous applause from the crowd and Vogels was visibly happy about the new product.
And Vogels took a stab at the competition, too - the old guard, as AWS management describes them, as being technology driven rather than customer driven; the old guard adds technology as it wishes, which leads to unnecessary complexity, vs. AWS, which only adds features that customers request.
Vogels even went so far as to refer to lean principles in the AWS development process - ensured by focusing only on what customers request. So AWS and customers form an epic collaboration relationship.
Netflix award winners
Next it was the Netflix Chief Product Officer Neil Hunt with Chief Cloud Architect Adrian Cockcroft on stage, talking about how Netflix has been building its platform on top of AWS. And to Netflix's credit, it has contributed many of its platform components to open source.
Netflix has realized that the developer community is key and created the NetflixOSS Cloud Prize, which awards each winner $10k in cash, $5k in AWS credits and a trip to re:Invent. And the 10 winners have truly built innovative software. Remarkably, this was completely merit based and not political - e.g. gentlemen from IBM and Eucalyptus won 2 of the 10 prizes.
Philosophy Part 6 - The Power of Innovation
Vogels made clear that all innovations that AWS provides are there to be around forever. They can't be lost and need to be maintained. And Vogels postulated 5 principles around which AWS innovation anchors:
And Vogels believes that if AWS works hard on all these 5 dimensions, then AWS customers will do well. And the rest of the keynote was structured along these 5 principles.
It is all about IO, stupid (or performance)
Interestingly, Vogels then mentioned that what matters to customers is storing and serving the data of AWS applications - and with that, it matters to the division. And for storage the most important KPI is IO performance, and IO needs to be consistent. He then quoted the famous statement that disks are becoming the new tape. But random IO makes it very hard to get consistent performance out of these systems. So AWS is moving to SSD to provide consistent, random IO.
Instagram is the example: by moving to SSD it was able to move data 20 times faster between middle tiers and backend servers. So now AWS uses SSD, too - and announced the new I2 instances, which on the lower specs are cheaper than the HI1 instances (those delivered 120k IOPS).
Not surprisingly, AWS uses these instances itself, and Vogels' example to illustrate consistent performance was of course DynamoDB. Consequently we saw a flat performance chart for average DynamoDB latency. And to aid performance consistency further, AWS announced the availability of global secondary indexes for DynamoDB.
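To make the global secondary index announcement concrete, here is a sketch of the request parameters for a DynamoDB table with a GSI, in the shape boto3's `create_table` expects. The table, attribute and index names are invented for illustration.

```python
# Sketch of create_table parameters for a DynamoDB table with a global
# secondary index (GSI). A GSI re-partitions the same items on a
# different key, enabling e.g. "top scores per game" queries without
# full-table scans. All names are illustrative.
table_params = {
    "TableName": "GameScores",
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
        {"AttributeName": "TopScore", "AttributeType": "N"},
    ],
    "KeySchema": [  # base table: partition by user, sort by game
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},
    ],
    "GlobalSecondaryIndexes": [{
        "IndexName": "GameTitleIndex",
        "KeySchema": [  # index: partition by game, sort by score
            {"AttributeName": "GameTitle", "KeyType": "HASH"},
            {"AttributeName": "TopScore", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
}
# With AWS credentials in place one would submit this via:
# import boto3
# boto3.client("dynamodb").create_table(**table_params)
print(table_params["GlobalSecondaryIndexes"][0]["IndexName"])
```

The design point: the base table and the index answer different access patterns over the same data, which is exactly what makes consistent, indexed reads possible without scans.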
And then it was up to Parse CEO and co-founder Ilya Sukhar to provide a showcase for consistent performance. Parse markets itself as a cloud of its own - with key mobile, push, storage and analytical capabilities. Sukhar showed lines of Objective-C code - the first code seen this re:Invent - certainly welcomed by the audience. The business event that created the showcase for AWS that Parse represents happened when Parse went on Facebook and its app volume jumped from a few hundred to 160k.
And AWS also helped Parse make MongoDB performance consistent using PIOPS - which cut the baseline latency in half; spikes disappeared, and overall Parse is now scaling much better, as memory warm-up time has been cut by 80%. And finally, one of the main benefits for Parse was that its developers could focus completely on the customer and did not have to worry about infrastructure. And lastly, Sukhar mentioned the peace of mind for him as a CEO - knowing that the infrastructure can scale with AWS and is no longer something he has to worry about.
Philosophy Part 7 - Flip the Security Model
In the past it was up to customers to increase security on their data by e.g. turning on encryption. Vogels wants to turn this around and said that in the near future AWS customers will have to explicitly request not to have their data encrypted. Encryption and other security measures will be the new normal - getting less will be something customers have to request. Vogels' example was that a few years ago there was a discussion that https would be too expensive - but today it's standard. Along the same lines, he thinks that security measures under cost and performance scrutiny today will be standard sooner rather than later. And AWS may be an active change agent in this process.
Specifically for AWS this means that IAM and IAM roles become more important. And this has been achieved pretty well for S3, said Vogels. But how to do this in real databases - which data is accessible to whom - remained a challenge; for that, the fine-grained access control of DynamoDB is the showcase. For instance, mobile applications can access DynamoDB directly - a separation of customers by proxies is no longer required. And there is now support for SAML 2.0. Only now - which surprised me a bit - but better late than never.
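A sketch of what such fine-grained access control looks like in an IAM policy: the `dynamodb:LeadingKeys` condition key restricts a federated user to items whose partition key equals their own identity, which is what lets a mobile app talk to DynamoDB directly without a proxy. The table ARN, account number and identity variable below are illustrative values.

```python
import json

# Sketch of an IAM policy for DynamoDB fine-grained access control.
# dynamodb:LeadingKeys limits reads to items whose partition key matches
# the caller's federated identity; the ARN and the substitution variable
# are illustrative, not real resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```

The effect is that authorization moves from an application-tier proxy into the policy itself - the "flip" Vogels describes.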
Along these lines, Redshift gets encryption, and thanks to a dual-key system only the customer - not AWS (or other partners) - has access to the encrypted data.
Reliability
And of course reliability is achieved via the availability zones. AWS sees usage maturing, with customers even using different regions for their availability zones. The Japanese earthquakes and Hurricane Sandy are the recent events that make businesses consider spreading availability zones across regions.
And with AWS adding snapshot copy for Redshift, customers get the capability to secure their data warehouse easily across regions. Even more importantly, RDS will allow cross-region replicas. This makes migration between regions easier by allowing copies to be spread across regions. It gives customers many options for backup - from simple backup, to a pilot-light approach, to a warm-standby solution, and ultimately to a multi-site solution like the one Netflix is pursuing.
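A cross-region read replica can be requested with a single API call. Here is a hedged sketch of the boto3 parameters involved; the instance identifiers, account number and regions are invented, and the actual call is commented out since it requires credentials and an existing source instance.

```python
# Sketch of creating an RDS cross-region read replica. The call is made
# against the destination region and names the source instance by ARN.
# Identifiers, account number and regions are invented for the example.
replica_params = {
    "DBInstanceIdentifier": "mydb-replica-eu",
    "SourceDBInstanceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:mydb",
    "SourceRegion": "us-east-1",  # lets the client pre-sign the cross-region request
}
# With credentials and an existing source instance one would run:
# import boto3
# rds = boto3.client("rds", region_name="eu-west-1")
# rds.create_db_instance_read_replica(**replica_params)
print(sorted(replica_params))
```

This is the building block behind the pilot-light and warm-standby options mentioned above: the replica in the second region can be promoted if the primary region fails.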
Equally, compute needs to be part of the cost optimization story - and there Vogels stressed how important the spot market is. Customers shrewdly taking advantage of the spot market include Hungama for transcoding; Pinterest, which runs its front-end operation there and was able to reduce cost by 75%; and finally Vimeo, which differentiates between free and paid accounts: Vimeo free accounts are transcoded on the spot market, while paid accounts are transcoded on dedicated instances. And the final example was Cycle Computing, which can use all of AWS compute capacity - it procured 1.21 PFlops with over 16k instances and 264 years of compute to calculate compound formulas.
And the stunning revelation by Vogels in this case was that the cost of running that massive compute job was $33k - versus procuring the compute in a traditional on-premise setup, which would have cost the client $68M.
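A quick back-of-envelope check on the Cycle Computing numbers quoted above makes the gap tangible - this is simple arithmetic on the figures as reported, nothing more:

```python
# Back-of-envelope on the Cycle Computing figures quoted by Vogels:
# ~264 years of compute on >16k spot instances for ~$33k, versus an
# estimated $68M to procure equivalent on-premise capacity.
spot_cost = 33_000          # USD, as reported
on_premise_cost = 68_000_000  # USD, as reported
compute_years = 264

savings_factor = on_premise_cost / spot_cost
cost_per_compute_year = spot_cost / compute_years

print(f"on-premise is ~{savings_factor:,.0f}x more expensive")   # ~2,061x
print(f"spot cost per compute-year: ${cost_per_compute_year:,.0f}")  # $125
```

Roughly a 2,000x difference, or $125 per compute-year on the spot market - which is the whole argument for treating spot capacity as a first-class cost lever.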
And Vogels announced also the G2 instances that leverage the NVIDIA 'Kepler' GPUs, have 1536 CUDA cores and are great at encoding and streaming video.
Vogels confirmed that these G2 instances are the backbone of the Amazon AppStream product that Jassy announced yesterday. But AWS does not stop there - it also announced a new flagship compute instance, the C3. It runs Ivy Bridge and is an SSD-based platform.
And AWS would not be AWS if it did not offer a range of different configuration options.
Scale
The showcase for scale was the Dutch company WeTransfer, which transfers artist wallpapers and other attachments that are too large to send via email. And the company's success is creating a massive scaling problem, as a week in 2013 sees the same amount of transfers as a month did in 2012. And needless to say - they solved that with AWS.
Next up was Mike Curtis, VP of Engineering at Airbnb. Not surprisingly, Airbnb is experiencing massive subscriber growth, reaching 4M subscribers in January 2013. And about 150k people are hosted via Airbnb on any given night. Again AWS solved the scalability problems for the company. Even more convincingly, Curtis said that anytime AWS has something Airbnb could use, Airbnb uses it and does not look further.
Airbnb went from 24 EC2 instances in 2010 to over 1,000 in 2013. Photos are key for guests, as they pick their host property through them - and photo usage has gone from 294GB in 2010 to 50TB in 2013.
AWS and the Internet of Things
Next Vogels went over the many applications of sensor data and real-world machines that AWS enables customers to work with. Starting with the Nest thermostat, Illumina dumping sequencing data into S3, Tata Motors instrumenting trucks to predict preventive maintenance, the collaboration with GE on the industrial cloud, and helping capture sensor data from smartphones with startups like Human, which motivates people to be active for 30 minutes a day - it's all happening with AWS.
The combination of the offline with the online world is the common thread of these applications, said Vogels. And then he got a little geeky and social, showing a life-logging application from Sweden - presenting the narrative of his last 72 hours in Las Vegas, as the device takes a picture every x minutes.
The showcase for a massive real-world-to-AWS connection then was Dropcam, with CEO and co-founder Greg Duffy on stage. And Duffy made the great point that it was not about the hardware but the software - so Dropcam did not have to build a camera, but a camera web service. And interestingly, Dropcam is the largest video service on the web - with more data uploaded per minute than YouTube.
And as expected, usage started to go up massively when Dropcam moved to AWS - free inbound data transfer into AWS was a key reason for the move. Then Duffy walked the audience through the Dropcam architecture - as expected, compression starts on the camera, and Dropcam makes use of Scala, Python and PostgreSQL, heavily using DynamoDB.
AWS gets serious with realtime
A collection of key tweets from the keynote can be found in Storify here.
And you can watch the replay of the keynote here:
The yearly Dreamforce conference kicked off today in full swing - well, it hasn't even really kicked off, as the official start is only tomorrow. But we had a chance to attend the partner keynote, which was so full that attendees had to wait outside before it could get started.
Platform, platform and platform
The ecosystem matters
But then... Financialforce acquires...
But then the ecosystem is still relatively small - so acquisitions like the one FinancialForce.com made today will be watched with a wary eye by players inside and outside the ecosystem.
IBM is serious about developers and therefore open standards
The PaaS future is … BlueMix
And the SaaS Future is … APIs
Watson is … alive and well
HCM + Social + Analytics = a high potential formula
And IBM is serious... about design
What I missed
And my dozen serious - and some not-so-serious - takeaways are on Storify here.
So the 2013 edition of Dreamforce is over and it's time to look at the key takeaways from this 5-day mega event. Needless to say, everything Dreamforce is BIG - attendee numbers, space used, concerts held, bands playing, prizes ($1M!) etc. - which makes Dreamforce the dream event for any CMO. I am sure any marketing professional not working at Salesforce wishes they could spend similarly on their customer event. And Salesforce is under pressure to keep outdoing itself, as hype is a substantial ingredient of Dreamforce.
But let's cut through the hype and let's look what ultimately matters, the technology behind the announcements.
Despite A Litany Of Annual Trendy Themes, Customer and Developers Still Matter Most
Salesforce1's big potential
Advice for Salesforce customers
Advice for Salesforce partners
Advice for Salesforce competitors
Advice for Salesforce
It does not happen often - or ever, did it? - that two major cloud players announce general availability (GA) or a new release of their respective cloud platforms on the same day. HP announced version 13.5 of its public cloud and Google announced the GA of Google Compute Engine (GCE).
Some common trends emerge
There are some common trends that both the Google and HP announcements show:
HP's focus is the hybrid enterprise cloud load
Google's focus remains the high end public cloud load
Advice for customers
Advice for competitors
The noticeable absence of PaaS elements in the recent cloud announcements reflects the pioneering land-grab mode the market is in - but it likewise offers an opportunity for differentiation. Contrast the recent AWS, Google and HP announcements with Salesforce's Salesforce1 announcement and you can see the opportunity for more developer-centric tools and benefits in the overall IaaS battle.
Advice for Google
Google certainly does well with the overdue GA of Google Compute Engine. One has to wonder what held the company back for so long - on the flip side, waiting has probably not hurt Google's cloud business either. Opening up to more Linux distributions and more kernels was certainly a key table stake. Likewise, Google deserves kudos for not only putting up a partner program but also being able to attract key cloud players as partners.
Google will now have to show that it can cater to enterprise needs and that it can attract significantly more load from ISVs and startups as well as end users.
Advice for HP
HP deserves kudos for coming out with a release and being very transparent about known issues. Whether there were too many issues, as some reports suggest, is a good question. Only the future can tell whether the HP executives pressed the green release button too early - or not. The focus on software-defined networking and related security issues will resonate well with the target customer base.
HP is exposed to the whole OpenStack experience, and as with every open source commitment there are pros and cons on a pretty wild ride. HP now needs to build more credibility around OpenStack and further solidify its offering.
MyPOV
A good day for the cloud, in which some common trends beyond price erosion appear - higher-end instances are in demand, networking is (no surprise) crucial, and disk is the migration utility.
Bottom line: Google and HP offer divergent and differentiated value propositions. HP is taking the OpenStack road to the hybrid cloud; Google is firmly committed to its own IP and a high-end experience for deployments on its cloud. But the competition with AWS does not stand still - comparing these announcements with what Amazon announced at re:Invent, there remains a lot of catching up to do for Google, HP and the rest of the IaaS players.
With SAP being a global company, it has regional editions of its Sapphire and TechEd events. For anyone following the company it's always a key question whether the follow-up events to the traditionally leading North American event are pure repeats - or whether there are more or less fine-tuned changes.
For this year's edition of the TechEd developer conference it was more or less the same between Las Vegas and Amsterdam - but the event in Bangalore was an event of its own. There is a lot to digest from this TechEd - which just started today - December 11th - and will run until December 13th.
The big picture
So let's start with what I thought was the most interesting and important part, to see if it had changed from TechEd in Las Vegas and Amsterdam - and the good news is, it has not; the picture is still very much the same:
Wedged between HANA the database (running on the HANA Cloud Platform) and Fiori the front end are a number of different applications. And while this is easy to draw and compelling to explain, it has massive repercussions for SAP customers, SAP partners and SAP itself. More to come in a separate post, hopefully soon.
Takeaways from the press release
To address the fast and agile nature of today’s market, SAP has launched a regional test drive of a new open development environment for SAP HANA. SAP River intends to provide a real-time development environment for rapidly building complete native applications powered by SAP HANA and enable increased developer productivity through a simpler development process, easily maintainable source code and a clear separation of intent and yet an optimized native execution on SAP HANA.
MyPOV - This was certainly the big news - SAP (once again) creates its own development environment and language with SAP River. For those watching SAP long enough, developer productivity was the key reason ABAP was created in the first place and kept in place till today (ironically it was Shai Agassi at an event in Bangalore who said that ABAP would be around for a long time, because it's so productive).
We will have to see much more of River to judge whether it really merits the case - my general view (for now) is that the world (and developers) do not need another development language and environment.
OpenUI5, Open Source Variant of the SAPUI5, for the Developer Community
SAP is contributing key portions of its SAPUI5 framework as open source code on the GitHub site under an Apache Version 2.0 license. As part of the effort, SAP has developed different themes, framework features and control libraries to help provide a consistent end-to-end experience and foster reuse and efficiency, for example, via responsive layouts.
MyPOV - This was widely popular with the audience and in social media. And in general it's positive when large tech vendors contribute to open source and put muscle and investment behind it. But usually they take a lot from open source and then contribute back to the project with some fraction of that investment - OpenStack is a prominent example. In SAP's case with SAPUI5 - now OpenUI5 - SAP is the only contributor, and the question is who will collaborate and invest in it, and in which areas. If there was someone, SAP should have had them on stage and in this press release.
Another potential red flag is the detail of what SAP will keep and not contribute - something that had not been communicated as of this writing.
New Service Broker for Cloud Foundry to Connect to SAP HANA
SAP announced the code contribution and availability of a Cloud Foundry service broker for SAP HANA. Developed in close association with Pivotal and now available as open source on GitHub, the service broker will allow any Cloud Foundry application to connect to and leverage the in-memory capabilities of SAP HANA.
MyPOV - Long awaited and now it's here - a good move for HANA, the database. With the new River, SAP is competing with Cloud Foundry as a platform for developers to build next-generation applications on. I am sure SAP and Pivotal will differentiate by the type of applications you can build with each platform - but the partnership with Pivotal now becomes more of a database partnership than anything else.
New Node.js Connector for SAP HANA
In November 2013 SAP contributed an SAP HANA database client via GitHub that enables developers to efficiently connect Node.js applications to SAP HANA.
MyPOV - This goes along the same lines as the previous paragraph on Cloud Foundry - only here SAP does real good for open source, in the sense that a tech giant contributes (and presumably maintains) support for a popular open source framework (in this case Node.js) as part of a much larger open source project. Of course this comes with the interest of more Node.js projects adopting HANA as the database - but that is a very legitimate motivation for SAP. Well done.
New Beta Version for SAP PowerBuilder
A new closed beta version of the SAP PowerBuilder application development tool is intended to support new operating systems, databases and open standards and is planned for limited availability on December 15, 2013. SAP is committed to innovation without disruption, and with the new beta release beta customers can use their current investment in applications, knowledge and skills while taking advantage of new SAP innovations.
MyPOV - With the Sybase acquisition SAP also acquired an aging but prominent application development platform, the leader of the RAD race of the 90s, PowerBuilder. It's good to see that SAP keeps investing in the product, as there are still a lot of clients out there running applications on PowerBuilder. If I understood Reh correctly, PowerBuilder will be able to generate SAP River RDL code and thus become a contributor of code and (hopefully for SAP) developer hands. Whether that will work out, we will see - but it's good to see that SAP leverages all the developer assets and mind share at its disposal to get River off to a running start.
Easy Access to SAP Technology
A unified developer license covering all major platforms and technologies significantly simplifies the developer experience and makes it easy to get access to key SAP technologies.
MyPOV - It's always good to simplify licensing for developers - the details will have to be hashed out. But given that it looks like Schmerder created and updated it, I am pretty confident it's the best SAP was able to do at this point. Others will dissect it and find the issues.
Other key takeaways
As we talk about rivers and the event was in India, it is not surprising that an Indian river inspired a new codename - the Ganges. SAP Ganges is the latest attempt by SAP to model the commerce happening between its customers. Remember, SAP tried this in the dot-com boom era with SAPMarkets - and wasn't that successful - but with the Ariba and hybris assets in house, it's well worth giving it another try. It looked like Mondkar announced its development roadmap for Ariba through Sapphire in June 2014 - let's see what we hear and see between now and then. SAP Mentor Dick Hirsch has written a great first take here.
And then Fiori is making progress - progress it needs to make, as it was recently (at TechEd in Las Vegas) declared the one and only SAP UI going forward. For that it needs to gain capabilities way beyond its self-service user interface origins - and the demos of the sales forecast and manufacturing planning are encouraging - but a lot more needs to happen for a professional user's interface, powerful enough to be the user interface from 9 to 5 plus overtime.
It's good to see SAP courting developers, but so is everyone else. If I put my developer hat on and compare the TechEd messages with the ones I heard at Google I/O, Microsoft Build, VMworld, Oracle's JavaOne, AWS re:Invent and Salesforce's Dreamforce in the last 6 months - then SAP and River have their work cut out for them. All these vendors court enterprise developers and have larger developer ecosystems than SAP has ever had control over. Valiant to scramble the PowerBuilder folks, and kudos for supporting it - but that will not give SAP critical mass.
If SAP had, for example, announced support for the super-popular Docker, both to build on and to deploy on HANA - I am pretty sure it would have garnered (even) more attention and ultimately more apps running on HANA.
But then SAP is the largest business application vendor and can certainly try to attract developer masses into its proprietary architecture - that's the control ultimately all enterprise giants want to have. But Salesforce.com's struggles and ultimately limited success with Apex show that the proprietary path SAP seems to be taking with River is a very challenging path to success. But never say never. I have lots of respect for Klein and team.
And you can find the Storify here.
For quite some time there has been wide agreement on some criteria for cloud-based applications - namely that they had to support multitenancy all the way down to the database level, they had to be deployed on a public infrastructure, they would only have one production line of code, and so on.
Already slaughtered - no customizing
For a long time the cloud application vendors maintained that they could not support any form or shape of customizing - as they would otherwise no longer have a cloud application.
In my view this was a bit of a self-serving argument, as it allowed the vendors to move fast and with little complexity from release to release. As a matter of fact, the agility the cloud provides is most beneficial to the vendors.
But to be fair, more and more vendors are starting to support more or less elaborate ways of customizing their cloud applications. So already in 2013 we did not hear the mantra 'we are cloud, we don't allow customization' (much) anymore, which is good progress.
The first to go in 2014 - database multitenancy
We already wrote in mid 2013 that database multitenancy as we knew it - a database containing rows of data owned by different clients - is largely an architecture of the past. It was largely required due to hardware constraints of the very first cloud architectures - but should not be deployed in modern cloud storage in 2014 and onwards. Too many advantages speak for the end of database multitenancy - most prominently access security, predictable performance and operational advantages.
You could argue that database multitenancy has already disappeared in new, state-of-the-art cloud architectures - but 2014 will probably see this cloud architecture feature of the past driven close to extinction in modern cloud applications.
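The access-security weakness of classic row-level multitenancy is easy to demonstrate. The sketch below uses SQLite with an invented schema: all tenants share one table, every query must remember to filter on the tenant column, and a single missed filter leaks other tenants' data - one of the reasons this design is on its way out.

```python
import sqlite3

# Sketch of classic row-level database multitenancy: all tenants share
# one table and every query must filter on tenant_id. The schema and
# tenant names are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?)",
    [("acme", 100.0), ("acme", 250.0), ("globex", 999.0)],
)

# Correct query: scoped to one tenant.
rows = conn.execute(
    "SELECT amount FROM invoices WHERE tenant_id = ?", ("acme",)
).fetchall()
print(rows)  # only acme's invoices: [(100.0,), (250.0,)]

# The failure mode: forget the WHERE clause and every tenant's data
# comes back - there is no structural isolation to stop it.
leaked = conn.execute("SELECT amount FROM invoices").fetchall()
print(len(leaked))  # 3 - includes globex's row
```

Separate databases or schemas per tenant make that failure mode structurally impossible, which is exactly the access-security argument made above.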
Next - public shared infrastructure
A table stake of cloud architectures used to be that cloud applications had to be deployed on public, shared infrastructure. And while that is desirable for most applications, there are more and more companies out there that do not want their applications hosted on a public infrastructure. Some may say this is triggered by the whole NSA / PRISM sensibility; in my view, the saturation of early cloud adoption and the need of cloud application vendors to grow revenue play an equally important role.
And the vendors are reacting and gearing up for that - the AWS government cloud is an example, and Salesforce supporting the HP Superpod goes in that direction. And with most cloud application vendors embracing OpenStack, a deployment of their cloud applications on an on-premise OpenStack infrastructure is technically possible and, in my view, likely in 2014.
Of course cloud purists will now roll their eyes - and start to argue... we will see what 2014 brings.
And then - one release for all
As cloud applications get more and more adopted, it becomes more of a challenge to upgrade these applications centrally and synchronously for all customers. Cloud vendors have for the longest time argued (and in my view even hidden a little) behind the claim that if a single customer had their own version of the code, it would no longer be a cloud application. That's of course not accurate... and with more flexible deployments in 2014, we will see cloud vendors begin to support different code levels by customer.
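A toy sketch of what per-customer code levels could look like in practice - all names and the tax rule are invented for illustration, not any vendor's implementation. Each release registers its handlers; a per-tenant pin decides which version a request is routed to, with unpinned tenants on the current release:

```python
# Hypothetical per-tenant release pinning. Two versions of a payroll
# calculation coexist; a pin table routes each customer to its code level.
RELEASES = {
    "v20": lambda gross: round(gross * 0.30, 2),   # older rule
    "v21": lambda gross: round(gross * 0.28, 2),   # current rule
}

TENANT_PINS = {"acme": "v20", "globex": "v21"}     # per-customer code level
DEFAULT_RELEASE = "v21"                            # the synchronous default

def calc_tax(tenant: str, gross: float) -> float:
    """Dispatch to the release the tenant is pinned to."""
    release = TENANT_PINS.get(tenant, DEFAULT_RELEASE)
    return RELEASES[release](gross)
```

The price of this flexibility is exactly what the implications below describe: the vendor must test and operate every pinned code level, not just the latest one.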
Implications for customers
It will be key for customers to make sure their vendor supports the more complex code deployment landscape that results from slaughtering some of these sacred cows. Do not take 'that's not cloud' as an answer anymore in 2014.
Implications for vendors
If you are not revisiting your code delivery, application configurability and deployment options, 2014 may be a rude awakening for you. Better to disrupt and be early on these trends than be disrupted by the competition. Look at OpenStack as the easy way out - one that a number of the larger cloud vendors have already adopted or are at least heavily looking into.
MyPOV
Nobody knows what the future holds - otherwise those who knew would play the lottery and win every week... but it's about time that the criteria that defined a cloud application for the longest time are being revised by the market. Even sacred cows do not live forever.
In my view - a lot of that will happen in 2014.
P.S. Don't miss the 2014 cloud trends that fellow Constellation Research colleague Ray Wang and I have put together here.
In a press release today IBM announced its intention to extend its current data center landscape from around 20 to up to 40 data centers in 2014, effectively doubling the data centers running on SoftLayer architecture (our take on the acquisition is here). To get there IBM plans to invest 1.2B US$ in the coming quarters.
And with that IBM kicks off the cloud data center monopoly race - as we borrow from the popular board game. Of course the board game's most prized possession - the Boardwalk - does not apply to the cloud data center race, as no player will be able to own a country exclusively. But a land grab it equally is - as you can only build a finite number of data centers in a year, and put only so much fiber in the ground. So being early and fast matters.
Location, location, ....
We may be seeing the old brick-and-mortar retailer adage applied to cloud data center locations in 2014 and going forward. Location matters for a variety of reasons, the most prominent among them being:
What will be the 2014 prized possessions?
|IBM InfoGraphic from here|
Load drives the location strategy
Watson throws his weight in
SoftLayer is key
On January 23rd Workday officially unveiled its Update 21 to the public in various webcasts; the details of the release can be found in the press release here. Another highlight was the new user interface that Workday is releasing to its customers by the end of January. All the highlights of Update 21 can be found here.
A new user experience
With Update 21 Workday (finally) is moving away from its decision to run its user interface on top of Adobe Flex - a decision that in hindsight (which is always 20:20) wasn't the best. It is never easy for enterprise software vendors to change user interfaces - first and foremost for their customers' users, who need to learn a new user interface - but it is also a lot of work for the vendor. Even when you build on a declarative architecture, as Workday says they do, it means testing and validating all the screens - not to mention the time to design and improve existing screen layouts. There are simply only so many hours and days between releases in the SaaS world - so moving the whole user experience is a huge task that the Workday team deserves kudos for.
Workday also seems to have done away with one of the major challenges of the previous user interface, mainly the piling up of pop-ups on top of each other. No enterprise application can probably exist without pop-ups, but the current Workday user interface approach of rolling the pop-up over the existing canvas is certainly an elegant answer to the usability challenge. Familiarity with consumer applications on smartphones, and users mastering those user interface concepts, should certainly help.
|The new Workday start screen - from here|
The new user interface is also the farewell to a marquee Workday user interface control - the wheel - which may have overstayed its welcome - sometimes it's amazing how long certain interface elements can hang around. Workday has replaced the wheel with a plain list of icons - a user interface paradigm that scales much better for a dynamically growing enterprise application.
Moreover, Workday has implemented a new and more powerful search capability - which will certainly benefit line managers and HR professionals. It will require some getting used to, though - as it breaks the habit of searching hierarchically through the data model - something not desirable, but which users have been trained to do for decades. It will be very interesting to hear user feedback on this new capability.
|The New Employee Profile - from here|
Lastly, Workday also showed some improvements to smartphone and tablet user interfaces - not as extensive and advanced as the browser interface, but continued progress. Not surprisingly, the user experience between browser and mobile / tablet differs - and Workday said that in 2014 that will be something to harmonize. That harmonization will certainly be important for the upcoming recruiting functionality - which was designed along the mobile-first credo - but users will certainly use both the browser and their mobile device. And Workday already opened up more of the same challenge with Update 21 - enabling managers to hop back and forth between browser and mobile device during the performance management process. The more seamless and smooth Workday can make the user transition between the platforms, the bigger the win for their users.
And some more functionality
HCM Functional Richness continues to trend down
- Outlook Integration
- Chatter Integration
HTML5 Support for non-iOS devices
- New modules
- Global Support
Big Data Analytics
New User Interface for browser
- Talent Reviews
- Career Interests
- Cornerstone Integration
Performance Management Enhancements
Android Native Support & iOS Mobile Enhancements
User Experience - Configurable Grids for Compensation
Performance Management across browser and mobile
- Payroll for Canada
- Payroll Connector
More custom fields
- Notebooks for iPad
Legislative support - Report for ACA / RUP
Higher Education Functionality
207 Features / 80 Brainstorm Items
246 Features / 67 Brainstorm Items
And no mention of...
Again one hour is not a lot of time - but showing consistency (e.g. iPad functionality and new grid control) or progression on new platform capabilities (BigData Analytics) - would be a welcome content direction for future updates.
The customer perspective
It was good to hear that Workday is working with customers and listening to their findings and needs - it will be very interesting to hear from the trenches and some real-life experiences in the coming weeks. Let's hope for the industry, Workday customers and Workday itself that this will be a smooth transition, as UI improvements are key for the overall success of the SaaS market.
At the same time the HCM automation needs and desires of customers are rising and the competition is not standing still. HCM customers want more and more automation from a vendor, delivered as an integrated system that addresses HR Core, Payroll, Talent Management - and to varying depths Workforce Management. Globalization forces are hitting enterprises fast these days, and global support not only for localization but for local compliance, including payroll, is quickly becoming table stakes. Workday will have to chart its path to functional completeness and broader global compliance coverage soon - better sooner than later.
Earlier this week IBM kicked off its yearly Connect conference in Orlando. Think of Connect as the conference for all things formerly Lotus, collaboration, customer and employee experience - and everything around the Kenexa products. With over 5000 participants the conference was well attended; in a Q&A session IBM shared that over 35% of attendees were first-time attendees. That's an encouraging sign of renewed interest in the products that are part of Connect.
Opening Keynote Takeaways
Anybody sleepy at the (early) 8 AM start was certainly woken up by American Authors. Jay Bear led through the keynote with his usual flair, and Seth Meyers told us how social media has changed recruiting in the comedy space. It used to be - have you seen the comedian? - today it's - what is your Twitter account?
|Notice Hayman's shoes - un-IBM-esque|
Check out my colleague Alan Lepovsky's blog - he will soon have something up there on the rebranding to Connections and the overall social and collaboration takeaways.
From the cloud perspective the interesting part is that IBM announced it will move Domino application capabilities over to a PaaS powered by - no surprise here - SoftLayer. A good move for IBM to bring more load to its SoftLayer data centers and help customers renovate their Domino applications.
Demo Showcase - well done - but still a dizzying amount of product
IBM faces the challenge of how to tie together the diverse product set shown at Connect in a compelling keynote demo. And IBM did a good job - picking a bank as the showcase, with IBM employees playing roles that dealt with customer experience, talent management and mail / collaboration. The product scope is dazzling - and while IBM did a great job of showing each product name in the middle of a giant screen as it was demoed, it was hard to keep an overview.
|IBM Social Learning as part of the demo with Abby Euler and Tim Geisert|
Customer testimonials by Pepsi, Performance Bicycle, Sika and Petrobas were powerful and spanned the range of products well.
Kenexa Talent Suite
Not surprisingly, Kenexa is bringing together its various products in a talent suite, complemented by the new Connections collaboration suite. So in detail the Kenexa Talent Suite comprises
Kenexa keeps growing
Product vs Services
You can find a Storify collection of the keynote here.
Oracle had its first HCM World conference from February 4th to February 6th in Las Vegas, and it rolled out the big guns - both CEO Larry Ellison and President Mark Hurd presented, something you usually do not see at an Oracle conference except for Oracle OpenWorld. It was good to see both executives so well versed in people processes and the HCM pitch.
Nobody questions Oracle on technology, so whoever had the idea to de-emphasize the technology aspect for the sake of the thought leadership aspect deserves big kudos on the Oracle side. Apart from a demo between Mark Hurd and Gretchen Alarcon, software was relegated to the many track sessions. And aside from Larry Ellison's presentation and a few glimpses in Chris Leone's, the talk was never about the technology, but always about the challenges and struggles enterprises and Chief People Officers face every day. And the good news was - Oracle led with the business problem, not the technology; a radical departure from a past in which Oracle traditionally tried to convince customers mainly on the technical merits of its products. That doesn't work for HCM audiences, so it was a good move to adjust.
Oracle HCM Cloud Progress
We were surprised by the uptake of momentum of Oracle HCM Cloud back at OpenWorld - and can now report continued momentum in customer uptake. Oracle's go-live progression is close to proportional to the calendar date - which is a pretty encouraging sign. Customer go-lives span the globe, speaking for a good sales effort beyond North America, which can be a challenge for US-based vendors.
More importantly, the roadmap is not fizzling out in terms of functional richness, and the upcoming release R8 (sorry, all details were under NDA) includes new functionality in areas that are hard for all vendors to create and maintain. It was equally good to see that Oracle is tackling (and needs to keep tackling) the integration issues - both horizontally with other applications and internally within Oracle HCM Cloud, between the Oracle Fusion HCM side and the Taleo side. This integration remains one of the weaknesses of the Oracle HCM portfolio - but Oracle is confident that it can convince customers to accept the integration issues due to significant functional leadership on the recruiting side. And while that functional leadership is certainly there for now, it will be an area to keep a watchful eye on in the future.
Oracle also had the bravado to let the analysts play with the new iPad app - and can now successfully claim that even analysts can use their HCM system. Few other vendors have given the analyst community that opportunity, and this certainly raises the ante from a competition perspective. From 90 minutes with the iPad app we can say that it's highly functional, easy to use, and built in such a way that it is highly likely to make an everyday user (like a manager) happy using it.
Next steps for Oracle HCM Cloud
As mentioned, most information about the roadmap was under NDA - but Oracle lifted the kimono on a few areas that can be shared:
|Screenshot of the new HCM Cloud User Experience|
Peoplesoft - Alive & Kicking
In its earnings press release of February 26th, Workday disclosed the acquisition of Identified, a San Francisco based start-up that specialized in recruitment success using analytical algorithms.
A recap of Identified
The company has tremendous talent and started with condensing social network information into tangible (and if you will, actionable) candidate profiles, compiled out of available information from social networks. With that, Identified reverses the classic recruiting process - which expects candidates to come to the recruiting company - into a mini headhunting process, where the recruiter can start actively looking for candidates.
Needless to say, the collection, identification and creation of a candidate profile is no trivial task, and Identified spent significant time on this, creating its patent-pending SYMAN process. Recent demos of the product then showed the search over these profiles. Back in January we missed some analytical tools beyond the candidate profile that would help the recruiter focus on the right candidates without going through the overall result list, e.g. a scoring model to serve the best candidates to the recruiter. The good news was - this was on Identified's roadmap.
|Identified Screenshot from website.|
But then it looks like Identified ran out of steam on the business side. It probably needed more capital to build engagement and interaction functionality for recruiters and more ATS functionality, and also had to cater to the need for mobile support for recruiters. The company had taken 22.5M in funding; a financial analyst disclosed the acquisition was 15M - not a good exit, unfortunately.
Why Identified for Workday?
Workday is currently building its recruiting solution, expected in the first half of 2014, but its first release will miss sourcing functionality. My expectation is that Workday will replay the playbook we have seen with 3rd party compensation data: use its new BigData Analytics product capabilities to look at social network data and / or the classic sourcing providers and put the data onto the AWS-based storage for BigData Analytics. Then use the SYMAN algorithms to collect and condense it into candidate profiles and serve these to the users of its new recruiting product.
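SYMAN itself is patent-pending and proprietary, so here is only a toy sketch of the general idea - condensing scattered social-network signals into one candidate profile, then scoring profiles against a requisition so the recruiter sees the best fits first (the scoring-model gap noted above). All names and the scoring rule are invented for illustration:

```python
from collections import defaultdict

# Hypothetical raw signals scraped from different networks for the same people.
signals = [
    {"name": "Dana", "source": "network_a", "skills": {"java", "sql"}},
    {"name": "Dana", "source": "network_b", "skills": {"sql", "hadoop"}},
    {"name": "Evan", "source": "network_a", "skills": {"java"}},
]

# Condense: merge each candidate's skills across sources into one profile.
profiles = defaultdict(set)
for s in signals:
    profiles[s["name"]] |= s["skills"]

def score(profile_skills, required):
    """Toy score: fraction of required skills the candidate covers."""
    return len(profile_skills & required) / len(required)

# Rank candidates for a requisition instead of returning a flat result list.
required = {"java", "sql", "hadoop"}
ranked = sorted(profiles, key=lambda n: score(profiles[n], required),
                reverse=True)
```

A real pipeline would of course also deduplicate identities across networks and weight signal quality - exactly the hard parts Identified invested in.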
This has a number of pretty intriguing consequences:
BlueMix - lots of Blue on a Greenish platform
|IBM Infographic on the 1B investment.|
|Crosby with 'born on the web' SoftLayer customers|
SoftLayer is the go-forward answer for all things cloudy
Not surprisingly, IBM keeps strengthening its SoftLayer commitment. Reading between the lines it is clear that the SoftLayer x86 legacy cannot support the Power-based Watson plans - so not surprisingly IBM is bringing SoftLayer to run Power-based systems. There was some confusion about what is being brought to what - SoftLayer to Power or vice versa - but that is all good for a company like IBM finding and charting its course to the cloud.
Likewise IBM will invest more into the DevOps visibility and capability of SoftLayer resources. SoftLayer had more of a run-time DNA in the past for its clients, so bringing the additional flexibility to run more and better development cycles with SoftLayer only augments the platform.
|LeBlanc introduces BlueMix|
BlueMix Moves Beyond the Big Blue Legacy
There are a few takeaways that show this is not our father's IBM:
|Coding on stage - pair programming with Lawson and LeBlanc|
The bigger picture - 21st century enterprise applications
At the end of the day the question is - what will 21st century enterprise applications look like, how are they built, and what platforms do they run on. We know the traditional ERP suites of the late 20th century are not the answer for a digital economy. We also know that building and managing software is getting more and more complex; more automation to handle this complexity, coupled with less lock-in, are promising directions for the future of building applications. Which leaves us with the only thing we know - that the 21st century apps will run on virtualized environments in the hybrid multi-cloud.
As for the applications themselves - it's clear that IBM is gambling on the API economy promise: to be able to bind together the APIs of IBM SaaS properties with other, non-IBM APIs on a powerful platform. We know that this platform is BlueMix, which makes it a very strategic asset for the future of IBM, its customers and its partners.
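The API-economy idea can be sketched in a few lines - a platform-side registry that binds independent service APIs into a new application. The "APIs" below are plain stand-in functions with invented names, not real BlueMix or IBM SaaS services:

```python
# Minimal registry-and-composition sketch of the API-economy pattern.
API_REGISTRY = {}

def register(name):
    """Decorator that publishes a function as a named API on the platform."""
    def wrap(fn):
        API_REGISTRY[name] = fn
        return fn
    return wrap

@register("translate")          # imagined third-party API
def translate(text):
    return text.upper()         # placeholder "translation"

@register("sentiment")          # imagined in-house SaaS API
def sentiment(text):
    return "positive" if "great" in text.lower() else "neutral"

def compose(*names):
    """Chain registered APIs, in order, into one new application endpoint."""
    def app(payload):
        for name in names:
            payload = API_REGISTRY[name](payload)
        return payload
    return app

feedback_app = compose("translate", "sentiment")
```

The value proposition is exactly this composition step: the platform, not the individual services, is where new applications get assembled - which is why BlueMix is strategic.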
Implications, Implications...
So what does it mean for ...
MyPOV
For the longest time the ultimate application strategy for IBM has not been clear. Acquisitions seemed opportunistic and all over the map - at least from the outside. But maybe the plan was always to move to the API economy - ultimately. Only Steve Mills will know. And interestingly, Mills spoke about being able to do things better and at a greater scale than ever before (see interview here).
In spring 2014 IBM's future looks remarkably better than 12 months ago.
SAP announced new offerings for SAP HANA Cloud Platform. Customers now have the ability to choose from three offerings: SAP HANA AppServices, SAP HANA DBServices, and SAP HANA Infrastructure Services.
New pricing options for SAP HANA intend to broaden the reach of the SAP HANA platform and make it easily accessible to everyone starting with a base price with add-on options available as desired. Customers buy through a consumption model and can either implement end-to-end platform use cases or choose additional options as needed such as predictive analytics, spatial processing and planning. This significantly increases the opportunities to get started with SAP HANA and affords customers the ultimate ability to innovate.
To address the growing apps economy, SAP announced new and enhanced offerings for SAP HANA Cloud Platform. Startups, ISVs and customers can now build new data-driven applications in the cloud. This platform-as-a-service (PaaS) offers in memory-centric infrastructure, database and application services to build, extend, deploy and run applications. SAP HANA Cloud Platform is available today via SAP HANA Marketplace. Customers can gain access to SAP HANA in as little as 30 minutes and immediately benefit from a unified platform service for next-gen apps, and can easily buy, deploy and run with the flexibility of a subscription contract.
|A very good representation of services by Matthias Steiner from here|
The new SAP HANA Cloud Platform offerings are available for a simplified trial and purchase experience in sizes ranging from 128 GB to 1 TB of memory on SAP HANA Marketplace, an online store that lets customers learn, try, and buy applications powered by SAP HANA. SAP Fiori™ apps and hundreds of startup and ISV offerings are also available on the site.
The SAP HANA platform continues to change the world of compute and what is possible. SAP is paving the way for real-time personalized medicine with SAP Genomic Analyzer, a new application powered by SAP HANA that aims to allow researchers and clinicians to find breakthrough insights from genomics data in real time. Currently in the early adoption phase, key planned benefits include faster data processing through various stages of genomics pipeline — alignment, annotation and analysis — and immediate analysis of data in minutes rather than days. Researchers are envisioned to be able to analyze genetic variants of large-scale cohorts to find patterns of variation within and between populations. With better identification of clinically actionable genetic variants and real-time visibility into “in-the moment” situations, clinicians shall be able to understand and personalize care to patients with diseases such as Type II diabetes.
This new world record demonstrates the ability of SAP HANA and SAP IQ to efficiently handle extreme-scale enterprise data warehouse and Big Data analytics. SAP and its partners had previously set a world record for loading and indexing Big Data at 34.3 Terabytes per hour. A team of engineers from SAP, BMMsoft, HP, Intel, NetApp, and Red Hat built the data warehouse using SAP HANA and SAP IQ 16, with BMMsoft Federated EDMT running on HP DL580 servers using Intel® Xeon® E7-4870 processors under Red Hat Enterprise Linux 6 and NetApp FAS6290 and E5460 storage. The development and testing of the 12.1PB data warehouse was conducted by the team at the SAP/Intel Petascale lab in Santa Clara, Calif., and audited by InfoSizing, an independent Transaction Processing Council certified auditor.
The state of SaaS
|From Kurian's presentation.|
New – Information as a Service
The state of PaaS
|From Kurian's presentation.|
The State of IaaS
|From Kurian's presentation.|
A word on private cloud
|From Kurian's presentation.|
|From Zavery's presentation.|
Also worth a look for the full picture
More from me on the SI business: