
    I will be attending Kronos' user conference next week in Orlando - and when I got the customary what-to-do email from the vendor, I stumbled over a section mentioning an interesting marketing approach combining Twitter buzz with charity.

    In the email Kronos stated that it would donate one US$ to a charity for every tweet containing the hashtag #KW2013 - up to a limit of 10k US$. That seemed to be a pretty innovative way to combine what every vendor wants - publicity on social media, in this case Twitter - with attention gained by supporting a charity.

    I asked the Twittersphere this morning if anyone has seen something similar - and so far no one has come back - so kudos to Kronos for championing an innovative way to get attention on Twitter while equally making people feel good about tweeting, as every tweet containing the hashtag increases the donation amount.

    Some more thoughts on the topic

    • There is always a trade-off between paying for social media attention and not over-commercializing. The charity aspect takes away that conflict.
    • I am not a social media marketing expert - but the never-ending chase for an attractive return on marketing dollars spent looks pretty good on these 10k US$. Advertising the hashtag and promoting Kronos on Twitter may work, too - but my gut feel is that it would yield a worse return on marketing dollars spent.
    • Kronos could (or maybe should) increase the attention by regularly updating how much has already been donated at point x of the conference - and then, assuming success, increase the total donation limit.
    • Lastly, a vendor could get the Twittersphere even more involved by asking it to help decide which of e.g. three charities should receive the donation. So move from "tweet to donate" to "tweet where to donate" - more involvement, more attention, more social media buzz - that's what you want for a conference.


    Smart marketing spend by Kronos - and the first time I have run across this approach - so well done and kudos for supporting a charitable cause... and at least it got me to tweet about it - and even write a short blog post...


    It was interesting to attend the Kronos user conference, Kronos World, earlier this week in Orlando. Kronos is a key player in the HCM market, being the overall leader for workforce management. Quantitative specialist analyst firm IHS estimates that Kronos has a worldwide market share of 25% and a North American market share of 37%. And Kronos grows twice as fast as the market, at 10% in FY 2013.

    The conference was well attended with approximately 1700 attendees. It’s largely seen as an educational event, with an impressive number of training sessions on the schedule.

    And the user base is - compared to other enterprise software conferences - amazingly open about sharing experiences and challenges. It looks like the challenges around implementing workforce management software are such that companies are ready to share them with their peers. This is even more interesting since Kronos has a very strong vertical focus - separating sessions, tracks, meals and events by industry vertical, all the way to the lanyard color of attendees' name tags. So customers are practically sharing best practices and lessons learned with their direct peers.

    Good Keynote

    As usual with user conferences, the first day was dominated by the keynote, delivered by Kronos CEO Aron Ain. Ain did a good job on stage, coupled with some appreciated humor - and remarkably, he delivered his part of the keynote without any supporting slides, which was a welcome change.

    Kronos CEO Aron Ain on stage

    No keynote without customer testimonials - well done and on stage - and one of them, the City of Houston, was one of the highlights of the 2013 user conference keynotes: it does not get much cooler than having a customer on stage talking live about their experiences, demoing their analytics implementation themselves, live, on an iPad. Tough to top. And equally, Fifth Third Bank was a strong reference for Kronos.

    The guest speaker, Jeremy Gutsche, delivered an energetic presentation - the only challenge was how relevant the mostly generic, B2C-oriented trends were for the audience.

    What’s news?

    Kronos unleashed a number of press releases - let’s look at the key ones:

    • Kronos' cloud efforts are running at full steam, with now 9000 organizations using the Kronos Cloud. It’s not clear what percentage of the Kronos customer base that is - but there is certainly interest in the customer base to move to the cloud. The drivers most mentioned by both customers and prospects were lower deployment costs and the ability to move update and upgrade burdens to Kronos. Unfortunately Constellation Research did not have the time to dig in more detail into the Kronos cloud infrastructure - so we can’t cast a vote at this time on how much this may be a hosted vs. a true cloud offering. Regardless, customers seem to be content when using the cloud products and interested, even eager, to move to them.

    • Kronos keeps investing in key mobile functionality - as more and more workforce management activity is recorded not on a desktop but on a mobile device. The added offline capability is a welcome feature, as wireless network coverage is still not ubiquitous, and we wish more HCM vendors would take notice and provide similar offline capabilities for their mobile offerings. Later this year geo-fencing capability is coming - a key convenience and automation feature for workforce management software. And finally, the demos all showed neat annotation capabilities, where a user can mark critical data and then share the annotations and/or screenshots with other users. With availability for iOS, Android and BlackBerry, Kronos covers the key platforms.

    • Social is a trend not excluding workforce management, so Kronos (wisely, we think) decided not to build but to partner - in this case with Tibco’s tibbr product - to bring better social practices and collaboration to its user base. The company even dabbles in gamification - though only by allowing managers to post statistics into an activity stream - a good starting point.
      In our view, overall a smart move, as any shift planner knows that everyday work is a communication and social challenge. The product will ship later in the year and we look forward to seeing its first incarnation.
    Kronos Social

    • Equally, like social, there is no enterprise software user conference in 2013 that does not touch analytics. Kronos is partnering (wisely again, we think) with MicroStrategy and delivers a number of powerful data visualization and exploration capabilities around workforce management data. Unfortunately - like many other vendors - Kronos is using the faux "analytics" term, describing a nicer version of dashboarding rather than enabling true analytics that take, or at least suggest, an action. But what isn’t there today can still come later.
    Kronos Dashboard

    • And let’s not forget that Kronos is also a hardware company, building and selling the inTouch devices, which keep getting upgraded regularly. The ability to clock in without a badge, thanks to a high-quality optical sensor, will certainly be an efficiency booster for this scenario. And of course it supports touch and plugs in well across connectivity options and firewalls.

      Addressing Quality Issues

      Kronos customers have been plagued by desktop-side Java that does not run beyond Java 6, causing a lot of headaches and even compelling them to take system upgrades. The good news is that Kronos is eliminating the client-side Java and replacing the former Flash UI with HTML5 - but it wasn’t clear what the company will do with client-side logic - at least not when we asked VP of Global Product Management Bill Bartow in one of our meetings.

      And beyond the Java issues customers also mentioned a number of quality problems. But the good news is that Kronos is actively addressing these issues.

      And as Constellation Research has stated before - platform and quality issues, though undesirable, are a realistic ingredient when dealing with enterprise software. What matters is how well the vendor responds, resolves the issues in partnership with the affected customers, and then implements the right safeguards to avoid future challenges. And to give Kronos credit, the company is certainly aware of this and knows that it has to deliver high-quality products with its December releases.

      Charitable Tweeting

      It was a nice touch by Kronos to donate $5 per tweet (the amount was increased) containing the conference hashtag #KW2013, up to a $15k limit, for a charitable cause - certainly a classy move and probably a very good return on marketing. Read more about it in my original post here.
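The capped-donation mechanics are simple enough to sanity-check in a few lines; the $5 rate and $15k cap below are the figures from the conference, everything else is illustrative.

```python
def donation_total(tweet_count, per_tweet=5.0, cap=15_000.0):
    """Donation grows linearly with tweets until the cap kicks in."""
    return min(tweet_count * per_tweet, cap)

# With the $5/tweet rate the $15k cap is reached at 3,000 tweets.
print(donation_total(1_000))   # 5000.0
print(donation_total(5_000))   # 15000.0
```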

      Going forward

      Kronos is doing all the right things, addressing three of the key disruptors for enterprise software: mobile, social and analytics. The company is certainly furthest along on the mobile side, then analytics, then social - but all three topics should be addressed with significant breadth and depth by the end of 2014 - so the question is: what is next for Kronos on the product side?

      Earlier in 2013 Kronos unveiled its partnership with SAP SuccessFactors - SuccessFactors president Shawn Price was on stage with Ain - and from all we heard from executives and customers, the partnership of combined sales teams selling their respective products is going well. But there needs to be more than a cloud partnership for Kronos.

      So the analyst and blogger community probed the executive team many times on this - but the answer has always been that the company will stick to the workforce automation theme. So no forays into core HR, payroll, e-learning, performance management etc. The most that could be extracted from Kronos CEO Ain was that the company wants to go after the remainder of their customers' employees that they do not reach - yet. Whether this means a Kronos ambition in project management remains to be seen.

      But let’s not forget that the company is in the enviable position of facing only a few, relatively weak competitors in North America and practically no competition for global workforce management needs.


      It’s good to see an enterprise software vendor executing, and Kronos certainly is executing well. And while the company needs to make sure to deliver quality software and deliver on its growth plans - both up and down market and globally - it must be a good time for Kronos right now. And good times for the vendor translate, with some delay, into good times for the customer base. And the customer base certainly had a good time in Orlando.

      Longer term - later in 2014 - Kronos will have to address strategic growth plans that, in our view, need to be fueled by expanding the automation portfolio Kronos offers. But let’s get customers' - and our - hands on the December releases first.

      We did a Storify of key tweets from the keynote that you can find here.


      When it comes to cloud integration, there are basically three larger vendors to watch - Tibco, Software AG and Informatica. The latter released its Winter 2014 edition of the Informatica Cloud family of cloud-based integration and data management solutions right on time. So time for a news analysis of the press release of November 12th, 2013.

      Informatica timed this pretty well - kudos to the marketing people for seizing the timing and to the developers for delivering the release on time - on the eve of the largest IaaS vendor's user conference, AWS re:Invent, and a week before the user conference of Informatica's largest cloud integration partner. And with over 60 billion monthly cloud transactions, there is certainly relevant data moving across the cloud on the Informatica platform.

      So what is being announced to come in Winter 2014?

      Cloud Integration Innovations: Driving Productivity in Real Time
      Informatica Cloud Winter 2014 introduces new advances in real-time data and application integration, while also placing more productivity-driving power in the hands of line-of-business users. These advances include:
      • Unifying cloud data, service, and process integration - New service and process integration functionality enables users to build standards-based, advanced real-time integrations in the cloud across application services, processes and systems. The model-driven design approach enables LOB process analysts and integration developers to be highly productive developing service orchestration and process automation applications, with improved collaboration and without requiring specialized development skills. Integrations developed on Informatica Cloud can be exposed as REST APIs to enable seamless access to processes and data from any consumer application, including mobile applications, web mashups and business partners.

      MyPOV – With a lot of cloud purchasing power residing in the LOB, it is a good move by Informatica to support integration capabilities closer to the LOB.
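To illustrate the "exposed as REST APIs" point above: a consumer application could invoke a published integration with an ordinary HTTP POST. The endpoint path and payload shape below are our own invention for illustration - the press release does not document the actual API - so treat this as a sketch of the idea, not of Informatica's interface.

```python
import json
import urllib.request

# Hypothetical endpoint for an integration published as a REST API.
# Host, path and payload shape are assumptions, not Informatica's API.
ENDPOINT = "https://integration.example.invalid/api/v1/integrations/order-sync/run"

def build_invocation(record):
    """Package one record as a REST call against the exposed integration."""
    body = json.dumps({"input": record}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_invocation({"orderId": "42", "status": "shipped"})
print(req.get_method(), req.full_url)
```

Any consumer - a mobile app, a web mashup, a partner system - would issue the same kind of call, which is exactly the appeal for LOB-driven integration.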

      • Crowdsourcing integration processes - New Global Repository expedites sharing of reusable cloud integration processes within the enterprise or across enterprises to increase development productivity, foster collaboration on complex business processes and accelerate time to develop and deploy integration projects.

      MyPOV – There is a lot of dynamism and even more hype around crowdsourcing. But at some point this trend will take off and be relevant - most likely more with business users than with technical users - so it’s a good step for Informatica to enable repository-based re-use. And it’s easy - it’s all in the cloud already... And it’s a good monetization option for customers and partners alike. We are waiting to see LOB execs haggling and trading their Informatica integration assets at a user conference. Seems far out - but we think we may see this come sooner than most vendors think.

      • Seamlessly integrating interdepartmental processes - New Mid-Stream Writer empowers users to create advanced data flows enabling business processes that span departments and their individual cloud applications (e.g., Salesforce CRM, Marketo, Workday, etc.) for increased business productivity.

      MyPOV – The capability to influence and transform data in the middle of an integration flow is very powerful. We need to learn more about the specific capabilities, but it’s certainly a powerful and elegant feature for the Informatica Cloud.

      • Extending Salesforce Outbound Message Support with Custom Integration Templates to easily implement re-usable integrations triggered by events in Salesforce.

      MyPOV – A key feature to better understand what is happening inside of Salesforce. We will have to see what Informatica will be able to offer beyond messaging in upcoming releases.
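For context on what such event-triggered integrations consume: Salesforce outbound messages arrive as SOAP notifications. The envelope below is a hand-written, simplified illustration of the general shape (namespaces and fields abbreviated), not a verbatim Salesforce payload.

```python
import xml.etree.ElementTree as ET

# Hand-written sample in the general shape of a Salesforce outbound
# message notification -- simplified for illustration.
SAMPLE = """
<notifications xmlns="http://soap.sforce.com/2005/09/outbound">
  <Notification>
    <Id>04l000000000001</Id>
    <sObject>
      <Name>Acme Corp</Name>
      <StageName>Closed Won</StageName>
    </sObject>
  </Notification>
</notifications>
"""

NS = {"ob": "http://soap.sforce.com/2005/09/outbound"}

def extract_events(xml_text):
    """Pull the notification id and object fields out of the payload."""
    events = []
    for note in ET.fromstring(xml_text).findall("ob:Notification", NS):
        obj = note.find("ob:sObject", NS)
        fields = {child.tag.split("}")[-1]: child.text for child in obj}
        events.append({"id": note.findtext("ob:Id", namespaces=NS),
                       "fields": fields})
    return events

print(extract_events(SAMPLE))
```

An integration template would map these extracted fields onto whatever downstream system the event should update.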


      New Cloud Automation Enhancements: Increasing Mobility and User Productivity
      Natively integrated with Salesforce, Informatica Cloud Extend enables business users to create and deploy custom guides that automate Salesforce CRM processes and ensure that best practices are followed. Informatica Cloud Winter 2014 introduces new productivity features to Cloud Extend, including:
      • Increased mobility– The Cloud Extend iPhone mobile app, previously available as a preview release, is now generally available. A new Cloud Extend Android mobile app is now available as a preview version.

      MyPOV – The appetite to consume all things on mobile devices does not stop for integration data, process and status. A good move by Informatica.

      • Simplified guide creation– Users can now easily create embedded sub-guides from within a parent guide that will use them, as well as quickly view and navigate to all guides that embed a particular sub-guide.
      • Skip-able Automated Steps– Users are now able to use a “skip button” to sidestep any automated process steps (e.g., “Do you want to create a contact?”) not specifically required for the task at hand.

      MyPOV – Giving users more pre-configured options to control integration flows makes Informatica Cloud more attractive as the LOB tool of choice when it comes to integration.

      Cloud TDM Enhancements: Safeguarding Sensitive Data in Cloud Application Sandboxes
      Earlier this year, Informatica introduced a unique Informatica Cloud Data Masking service that reduces the risk of data breaches during application development and testing. Informatica Cloud Winter 2014 delivers enhancements in this increasingly important arena to help make development on the platform even more productive, including:
      • Enabling developers to quickly select and mask related Salesforce objects and move them into the Salesforce sandbox for testing. 
      MyPOV – Kudos to Informatica for making a key cloud problem easier. Masking data for test and feasibility purposes is a painful and burdensome process, and this will certainly be welcomed by organizations pursuing cloud projects.

      Cloud Data Quality Services: Control the Accuracy, Consistency and Completeness of Data
      Informatica Cloud Data Quality Services enable organizations to take control of the accuracy, consistency and completeness of their data-driven operations with a low-cost, per-transaction subscription. These SOAP-based web services are easily consumed by any analytical or operational application, or used as discrete data quality rules within any business process. Pre-defined data quality rules standardize, cleanse, validate and enrich data for all industries. In addition, specific data quality rules for the healthcare and financial services verticals are also included in this release.

      MyPOV – For Informatica to expand on its existing MDM and data quality tools is a good move. The absence of "user" in the above press release paragraph may be an omission - or may hint at a more complex skill set being required to leverage these services. More details will be needed.


      A substantial release for Informatica that propels its offering further as one of the leading cloud integration vendors. Focusing on the LOB and making it easier to integrate between public cloud and on-premise systems addresses a fast-growing market demand that Informatica is positioning itself well for.

      Whichever vendor is able to simplify this complex process enough to empower a non-technical LOB user will be on the path to victory in cloud integration.


      The AWS re:Invent conference is in full swing in Las Vegas, attended by about 9000 clouderati - so it's time to look at the key statements and what our takeaways from day 1 are.

      The usual conference statistics first: almost 9000 attendees from 97 countries, evenly split across startups, SMBs, and large enterprises. Re:Invent (going forward, for simplicity's sake, spelled reinvent) is an educational event with more than 175 sessions, often run by customers and partners. And with 196 exhibitors, this is proof of a thriving AWS ecosystem.

      AWS has come a long way

      Andy Jassy, SVP of AWS, kicked off by walking the audience through what AWS has achieved in the little more than 10 years it has been around - available to customers for about 7 and a half years now, since the first launch of S3 (on March 14th, 2006).

      Jassy - in our view rightfully - can be proud of the breadth and depth of the offering.

      Comparing my notes with the November 2012 reinvent though, I was surprised to see that AWS still has the same 9 data centers as back then - but has added one more availability zone (now 26) and 4 Edge locations (now 42). While competitors are adding data centers (e.g. Oracle this week) or moving data centers (e.g. IBM moving to SoftLayer data centers), the AWS topology seems to be stable - a sign of the maturity of coverage the division has achieved.

      And AWS always likes to talk about how much capacity it adds on a daily basis and what size of Amazon business that would support. That KPI has moved from supporting a $5.2B Amazon business to a $7B Amazon business in 2013 - a more than 30% increase year over year.

      Customer growth is equally impressive, and Jassy not surprisingly pointed out the progress in the public sector space. AWS never gets tired of stressing that it cares about security, and given the enhanced security requirements of the public sector that AWS fulfills, all customers benefit from the progress made there. More on that below.

      Marketplaces rock

      The AWS Marketplace - only recently introduced - seems to be doing very well these days, with the number of products up 71% to 1100 products listed. A good proof point of a mature platform.

      But with the AWS business not publishing any numbers, overall or for the marketplace specifically, it all remains speculation how much revenue flows through this marketplace. This is an area where the division would be well advised to think about publishing revenue numbers - or at least vague indications, e.g. that the marketplace is as big as Amazon was in year such and such. But still, going from a few products to over a thousand in 18 months is a great achievement and an indication of the size of the AWS ecosystem.

      Security, Security

      And of course the fear of the cloud is still present for many enterprises today - and with that remains a key concern that AWS is addressing. On one side it's good to see Jassy acknowledging security as concern number 1 for AWS; on the flip side it would be good to see the market maturing and graduating from that fear.

      But that's not AWS' fault - and AWS works on making it less of a concern with the new AWS CloudTrail feature that was announced today. CloudTrail gives customers insight into what is happening in their AWS instance by recording the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
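The record fields listed above translate into a straightforward JSON structure in the S3 log files. The records below are hand-made for illustration (the field names follow the CloudTrail shape described in the announcement), and the summary function shows the kind of security analysis this history enables.

```python
import json

# Hand-made records in the CloudTrail shape described above: who called
# which API, when, from where, with what parameters.
LOG = json.loads("""{
  "Records": [
    {"eventTime": "2013-11-13T10:01:00Z", "eventName": "RunInstances",
     "sourceIPAddress": "203.0.113.7",
     "userIdentity": {"userName": "alice"},
     "requestParameters": {"instanceType": "m1.small"}},
    {"eventTime": "2013-11-13T10:05:00Z", "eventName": "TerminateInstances",
     "sourceIPAddress": "198.51.100.9",
     "userIdentity": {"userName": "bob"},
     "requestParameters": {}}
  ]
}""")

def calls_by_user(log):
    """Security analysis 101: which identity issued which API calls."""
    summary = {}
    for rec in log["Records"]:
        user = rec["userIdentity"].get("userName", "unknown")
        summary.setdefault(user, []).append(rec["eventName"])
    return summary

print(calls_by_user(LOG))  # {'alice': ['RunInstances'], 'bob': ['TerminateInstances']}
```

Partners like Splunk essentially run this kind of aggregation at scale, with alerting and change tracking on top.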

      The good news is how AWS integrates with and re-uses its own technology - the log files get dumped into S3. Equally, AWS gets kudos from us for allowing multiple partners, including Splunk, to offer integrated solutions to analyze CloudTrail log files. These solutions include features like change tracking, troubleshooting, and security analysis.

      As with any new service it does not cover everything and is not available everywhere. AWS CloudTrail is no exception, with only 8 services supported (key ones though, like EC2, EBS and RDS) and 2 regions live (US East and US West (Oregon)).

      Our largest concern at this early stage is that logs only become available around 15 minutes after an event happens. That is certainly fine for most events - but for some critical ones a lot of damage and harm can be done during that time. We think AWS can and will address this - maybe sooner rather than later.

      Philosophy Part 1 - The AWS pricing flywheel

      Next, Jassy gave some insight into the dynamics - he called it a flywheel - behind the numerous price reductions that AWS has seen. The wheel starts with external infrastructure innovation, which drives lower infrastructure costs, which allows AWS to reduce prices, which makes AWS attractive to more customers, who then use AWS more, which requires AWS to buy more infrastructure, enabling economies of scale - and the whole process repeats.

      And we certainly believe in the economies of scale, especially after fellow analysts at Gartner published that AWS has more than 5 times the compute capacity of its competition combined.

      Along the same lines, Jassy repeated the famous Bezos quote that he has yet to meet the customer who wants to pay more. And according to Jassy, AWS listens to customers but also does not want them to spend money unnecessarily - so AWS monitors system usage and has so far told customers, over 700k times through its Trusted Advisor program, that they can save money by stopping unused but paid-for services. The combined savings customers have achieved by following the advice are $140M. Truly unique and impressive - and we have to trust AWS that the number is valid.

      Philosophy Part 2 - Cloud means everybody can innovate

      In this interesting part of the keynote, Jassy made a case for employees at large corporations. Basically he said that these employees want to be, and are, innovative too - but an outdated infrastructure holds them back. By using cloud infrastructure, even large corporations can become dynamic again: it can be ramped up quickly, with little investment, and can disappear equally quickly when no longer needed. "The cloud brings agility to the large enterprise" is the new tagline. And these enterprises can now try innovative things quickly - and fail fast - with little investment.

      All that of course gets more relevant with more disruptions happening to traditional businesses from cloud businesses - the examples being (of course, all AWS customers) Airbnb, Spotify, Dropbox, Instagram and Uber.

      Next up was Jeff Smith, the CEO of Australian finance giant Suncorp, giving an example of such an approach.

      Philosophy Part 3 - Cloud Benefits

      Not sure why, but Jassy felt he needed to remind the audience of the basic cloud benefits - and that these, not surprisingly, cannot be leveraged in a private cloud.

      But then Jassy acknowledged that enterprises will not move overnight from on-premise to the cloud. The world isn't black and white but grey - and AWS helps customers bridge the gap between on-premise and the AWS public cloud.

      Where AWS differentiates itself from the competition (which AWS somewhat cutely calls "the old guard") is that AWS sees the on-premise part ultimately shrinking to close to 10% of the overall load. The old guard, in contrast, sees public cloud as only 25% of the future load in an enterprise.

      It's a nice way to differentiate - but we think ultimately customers will do what is best for them - so probably the old guard's Pac-Man view is a temporary state of the overall cloud vs. on-premise load balancing.

      And now something completely different - VDI

      It's no small surprise that AWS now wants to be in the virtual desktop business - a business that, in our view, has never really taken off, despite a lot of promise. But maybe 2013 is the fall of VDI after all, with VMware acquiring Desktone and now AWS launching its offering.

      Not surprisingly, Amazon right out of the gate says that it will take its stab from another direction - the public cloud. So all the cloud benefits are given: fully managed, pay-as-you-go, no CAPEX required. And AWS can provide the license, or it can be BYOL - which is a good start. Moreover, the integration with Microsoft Active Directory makes it easy to deploy, as users can keep their credentials. And AWS WorkSpaces supports PCs and Macs as well as iOS and Android tablets. Lastly, AWS claims that by avoiding the usual on-premise costs such as server hardware, storage and administration, it can provide VDI at half the cost of on-premise VDI deployments.

      Finally Jassy also unveiled the pricing: depending on the hardware chosen and the need for software, a VDI user will cost between $35 and $75 a month. A good price point - and, as we know AWS, certainly not its last one.
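With the announced $35-$75 per user per month, the fleet economics are a quick back-of-the-envelope calculation. The 100-user fleet size below is our assumption for illustration; AWS' "half the cost" claim would put the implied on-premise comparison at roughly twice these figures.

```python
def annual_cost(monthly_per_user, users, months=12):
    """Pay-as-you-go fleet cost: no CAPEX, just the monthly subscription."""
    return monthly_per_user * months * users

USERS = 100  # assumed fleet size, for illustration only
low, high = annual_cost(35, USERS), annual_cost(75, USERS)
# Announced price band: $42k-$90k per year for 100 users; AWS' claim of
# half the cost of on-premise VDI implies roughly $84k-$180k on premise.
print(low, high)
```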

      We had the chance to speak with executives from VMware in the aftermath of the keynote, and they said that they can be equally - if not more - attractive on price, and that with Desktone they have a mature product and significantly more functionality than AWS WorkSpaces. Certainly a valid point. But never underestimate AWS as a competitor. We asked for a reaction from the other incumbent, Citrix, but haven't heard back yet.

      Cloud strategies as seen by AWS

      Next Jassy went over six different cloud deployment scenarios - as AWS sees them. The six are as follows...
      • Use the cloud for development and test environments - examples were Lionsgate (SAP), Tokyo (Oracle) and Galata Chemicals (SAP again)
      • Use the cloud to build brand-new apps
      • Use the cloud to make existing on-premise apps better (e.g. use Redshift for analysis)
      • Use the cloud to tie new cloud apps back to existing on-premise apps
      • Use the cloud as a destination to move applications to opportunistically - examples were Unilever (web sites) and Bristol-Myers Squibb (simulations)
      • Use the cloud to move all on-premise apps to it - to "go all in", as AWS calls it (examples were Netflix and Dow Jones)

        More Mobile with Amazon AppStream 

        AWS has catered to the mobile market for a while, offering out-of-the-box mobile capabilities for developers who build mobile applications. And now AWS wants to help tackle a key problem - the rendering of highly interactive and compute-intensive applications across devices, as seen in gaming, engineering, events etc. - with a new service called Amazon AppStream.

        Amazon AppStream is a flexible, low-latency service that lets customers stream resource-intensive applications and games from the cloud. It deploys and renders applications on AWS infrastructure and streams the output to mass-market devices such as personal computers, tablets, and mobile phones. Because the application is running in the cloud, it can scale to handle vast computational and storage needs regardless of the devices customers are using, and customers can choose to stream either all or parts of their application from the cloud.

        The good news is that AWS has the computing resources in the cloud to make this happen - and the connections to the endpoints where these apps will be consumed. So it was an almost natural step for AWS to provide streaming support for resource-intensive apps.

        Interestingly enough, AWS developed its own protocol for the streaming - the Amazon AppStream STX protocol. Not sure if this was necessary - we look forward to learning more about it from AWS.

        [Update from AWS on November 15th 2013 - more about the protocol inserted:]
        The Amazon AppStream STX Protocol is a proprietary protocol used to stream high quality application video over varying network conditions.  It streams video and audio encoded using standard H.264 encoding over the UDP transport layer. The protocol also captures user input and sends it over TCP back to the application being streamed from the cloud.  Network conditions are constantly measured during this process and information is sent back to the encoder on the server. The server dynamically responds by altering the video and audio encoding in real-time to produce a high quality stream for a wide variety of applications and network conditions.
        Unlike other remote-access services which provide access to the operating system, an end user uses your client software to connect directly to the AppStream application.  This cuts latency and improves performance.
        Well, there we have it - still not sure why AWS had to build its own protocol - but at the end of the day it does not matter, as AWS is large enough to set its own standards.
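The adaptive behavior AWS describes - measure network conditions, feed them back to the encoder, adjust in real time - is a classic control loop. A toy sketch of the idea follows; the bitrate ladder, thresholds and headroom factor are all invented for illustration and have nothing to do with the actual STX implementation.

```python
# Toy adaptive-bitrate picker in the spirit of the STX description:
# the client reports measured bandwidth, the server-side encoder adjusts.
# All numbers are invented for illustration.
BITRATE_LADDER_KBPS = [1_000, 2_500, 5_000, 8_000]  # hypothetical H.264 rungs

def pick_bitrate(measured_bandwidth_kbps, headroom=0.8):
    """Choose the highest rung that fits under the measured bandwidth,
    keeping some headroom so transient dips do not stall the stream."""
    budget = measured_bandwidth_kbps * headroom
    usable = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return usable[-1] if usable else BITRATE_LADDER_KBPS[0]

# Conditions improve, the encoder steps up; they degrade, it steps down.
for bw in (1_500, 4_000, 12_000, 900):
    print(bw, "->", pick_bitrate(bw))
```

The real protocol additionally carries user input back over TCP and re-encodes the video itself, but the feedback structure is the same.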

        [Back with original post.]

        Next up was Tradeworx President Arzhang Kamarei - the company built software for the SEC. The most interesting takeaway was that with only 5 FTE Tradeworx manages to monitor 5% of the stock market. 

And finally Kevin Baillie of Atomic Fiction showed how AWS can help create CGI animations - a perfect example of an elastic workload: huge compute demand during production and rendering, and then close to nothing once Atomic Fiction puts the whole project into... AWS Glacier. And the content and demo shown satisfied the geeky part of the audience, and probably the developer side in the audience, too - which did not get too much out of the Jassy keynote... which - no surprise - was all about the enterprise. 


A good start of re:Invent for AWS. Three significant product announcements - two that add value in two key areas for AWS, security and mobile. And a new market entry: for the first time AWS will reach end users directly, with Amazon WorkSpaces. 

        It will be interesting to see how all three new offerings will do - with WorkSpaces coming into the most entrenched market space. And AWS clearly has the enterprise in mind - as all three new offerings are key to make enterprises more comfortable and more productive with AWS.

What was interesting to note is how AWS uses its streaming dominance for further product innovation. This may be only high level - but if you stream video content (for Netflix) to the volume of one third of US internet traffic, to a number of different devices and form factors, then rendering game and app information for the same devices and streaming it (Amazon AppStream) is not a completely new challenge. When you then consider that running a user's desktop is a less challenging compute load than rendering high-resolution apps and games and streaming those, then the entry into VDI with Amazon WorkSpaces is not such a big step either. And both server compute loads - for Amazon AppStream and WorkSpaces - are... highly elastic. And the delivery of streams is something AWS has already mastered.

        Looking forward to day 2...


        A collection of key tweets from the keynote can be found in Storify here


As is becoming tradition, the day 2 keynote of the AWS re:Invent conference was given by the AWS CTO, Werner Vogels. Vogels - or, as a devoted developer community refers to him by his Twitter handle, @Werner - was once again at his best, leading through the 100-minute keynote in front of 9,000 clouderati.

As with any day 2 keynote it was interesting to see how Vogels built on what we learnt from Jassy yesterday. And he kicked off consistently by pointing at AWS' pace of innovation - a slide that was seen consistently throughout the conference. And to give AWS credit - it has been innovating and adding features at a rapid pace - even though it has been growing and getting more complex as a technology platform and organization. 

With 243 functions and features delivered so far throughout 2013, AWS is at a record pace of innovation even by its own standards. And Vogels tied things back to Amazon WorkSpaces and Amazon AppStream from yesterday's keynote as key product deliverables of 2013. It was good to see Vogels acknowledge that the amount of new features can also be overwhelming, but this will not stop AWS, as rapid delivery is in the DNA of the division.

        Philosophy Part 4 - Retro engineer starting with the customer

(Parts 1 - 3 are in yesterday's post here). As we have heard from Jassy (and in every other session we attended at re:Invent), AWS is customer focused and customers drive the innovation agenda. Vogels now went into how the development teams achieve this, which is almost like retro engineering, starting with the customer requirements. 

So Vogels said that the AWS teams start with a pseudo press release that describes what the new product / feature is all about. From there they write the FAQ on the product / feature. Next are the use cases and then the user documentation. The goal of the process is that the desired features are delivered and not lost in the traditional, e.g. waterfall, approach of product development. At Constellation we advise customers to start with the end in mind - this is pretty much the same philosophy applied to product development.

        Philosophy Part 5 - Keep it small 

The other key success factor of the AWS development philosophy is to keep the teams small - Vogels introduced the 2-pizza rule, which postulates that a team should be so small that two pizzas will feed it for dinner. Additionally the AWS teams work autonomously, own their product's road map and work decoupled from product launch schedules.

The other noticeable practice is that the product teams are in constant contact with customers, working with them on product direction and requirements. Speed is the most important factor for AWS, so the teams work autonomously next to each other. They release their products when they are ready - as fast as possible. The sooner customers have the product in their hands, the sooner AWS can start improving it - a key benefit AWS goes after.

In our view that is a laudable approach, but AWS needs to take account of the fact that its customers use many of its products, and often these need to work well together. With the AWS teams siloed and working on getting their products delivered as fast as possible, it is possible that the customer ends up becoming the AWS system architect - a scenario we think AWS management will want to avoid.

Vogels' showcase for this was the RDS team, which has continuously innovated based on customer requirements - and the key feature released at re:Invent for RDS has now been... PostgreSQL. And with that we were at one of the two major product announcements of Day 2.

        The announcement drew spontaneous applause from the crowd and Vogels was visibly happy about the new product.
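For developers, the announcement boils down to one new value for the engine parameter of the RDS CreateDBInstance call. A minimal sketch of such a request - identifiers, sizes and credentials below are made up, and in practice the dict would be passed to an AWS SDK client rather than printed:

```python
# Sketch of the parameters for an RDS CreateDBInstance request using the
# newly announced PostgreSQL engine. All identifiers and sizes here are
# illustrative; a real call would pass this dict to an AWS SDK RDS client.

def postgres_instance_params(identifier, storage_gb=100,
                             instance_class="db.m1.large",
                             multi_az=True):
    """Build the request parameters for a PostgreSQL RDS instance."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "postgres",           # the new engine choice
        "AllocatedStorage": storage_gb, # in GB
        "DBInstanceClass": instance_class,
        "MultiAZ": multi_az,            # cross-AZ standby for reliability
        "MasterUsername": "admin",      # placeholder credentials
        "MasterUserPassword": "change-me",
    }

params = postgres_instance_params("blog-db")
print(params["Engine"])  # postgres
```

Everything else - backups, Multi-AZ failover, patching - comes with the managed RDS service, which is exactly the pitch Vogels made.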

And Vogels took a stab at the competition, too - the old guard, as AWS management describes them - as being technology- and not customer-driven: the old guard adds technology as it wishes, which leads to unnecessary complexity, vs. AWS, which only adds features that customers request.

Vogels even went so far as to refer to lean principles in the AWS development process - ensured by only focusing on what customers request. So AWS and its customers form an epic collaboration relationship.

Netflix award winners

Next it was Netflix Chief Product Officer Neil Hunt with Chief Cloud Architect Adrian Cockcroft on stage, talking about how Netflix has been building its platform on top of AWS. And to Netflix's credit, it has contributed many of its platform components to open source.

Netflix has realized that the developer community is key and created the NetflixOSS Cloud Prize, which awards each winner $10k in cash, $5k in AWS credit and a trip to re:Invent. And the 10 winners have truly built innovative software. Remarkably this was completely merit based and not political - as e.g. gentlemen from IBM and Eucalyptus won 2 of the 10 prizes.

        Philosophy Part 6 - The Power of Innovation

Vogels made clear that all innovations that AWS provides are there to stay forever. They can't be lost and need to be maintained. And Vogels postulated 5 principles around which AWS innovation anchors:
1. Performance
2. Security (interesting - that was #1 for Jassy yesterday)
3. Reliability 
4. Cost (no one has ever said: I wish AWS would be a little more expensive)
5. Scale

        And Vogels believes that if AWS works hard on all these 5 dimensions, then AWS customers will do well. And the rest of the keynote was structured along these 5 principles.

        It is all about IO, stupid (or performance)

Interestingly, Vogels then mentioned that it is all about storing and serving the data of AWS applications that matters to customers, and with that it matters to the division. And for storage the most important KPI is IO performance, and IO needs to be consistent. He then quoted the famous statement that disks are becoming the new tape. But random IO makes it very hard to get consistent performance out of these systems. So AWS is moving to SSDs to provide consistent, random IO.

Instagram is the example: by moving to SSDs it was able to move data 20 times faster between middle tiers and backend servers. So now AWS uses SSDs, too - and announced the new I2 instances, which on the lower specs are cheaper than the HI1 instances (those gave 120k IOPS).

Not surprisingly AWS uses these instances itself, and Vogels' example to illustrate consistent performance was of course DynamoDB. Consequently we saw a flat performance chart for average DynamoDB latency. And to aid performance consistency further, AWS announced the availability of global secondary indexes for DynamoDB.
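What a global secondary index buys you can be shown with a toy in-memory model - a hypothetical game-scores table keyed by user, with an index on the game title so the same items can also be queried by that alternate key. This mimics the concept only, not the DynamoDB API:

```python
# Toy illustration of a global secondary index: the base table is keyed by
# user_id, while an index over game_title lets the same items be queried
# by an alternate key without scanning the whole table. Table and
# attribute names are made up; this is the concept, not the DynamoDB API.

from collections import defaultdict

class Table:
    def __init__(self):
        self.items = {}                   # base table: user_id -> item
        self.by_game = defaultdict(list)  # the "GSI" on game_title

    def put(self, item):
        self.items[item["user_id"]] = item
        self.by_game[item["game_title"]].append(item)  # index kept in sync on write

    def query_index(self, game_title):
        return self.by_game[game_title]

t = Table()
t.put({"user_id": "u1", "game_title": "Galaxy", "score": 410})
t.put({"user_id": "u2", "game_title": "Galaxy", "score": 270})
print(len(t.query_index("Galaxy")))  # 2
```

In DynamoDB proper the index is maintained by the service itself, which is exactly why keeping latency flat while doing so is worth a keynote chart.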

And then it was up to Parse CEO and co-founder Ilya Sukhar to provide a showcase for consistent performance. Parse markets itself as a cloud of its own - with key mobile, push, storage and analytical capabilities. Sukhar showed lines of Objective-C code - the first code seen this re:Invent - certainly welcome by the audience. The business event that created the showcase for AWS that Parse represents happened when Parse went on Facebook and its app volume jumped from a few hundred to 160k.

And AWS also helped Parse make MongoDB performance consistent using PIOPS (provisioned IOPS) - which cut the baseline latency in half, spikes disappeared, and overall Parse is now scaling much better as memory warm-up time has been cut down by 80%. And finally, one of the main benefits for Parse was that its developers could focus completely on the customer and did not have to worry about infrastructure. And lastly Sukhar mentioned the peace of mind for him as a CEO - knowing that the infrastructure can scale with AWS and is no longer something he has to worry about.

Philosophy Part 7 - Flip the Security Model

In the past it was up to customers to increase security on their data by e.g. turning on encryption. Vogels wants to turn this around and said that in the near future AWS customers will have to explicitly request not to have their data encrypted. Encryption and other security measures will be the new normal - getting less will be something customers have to request. Vogels' example was that a few years ago there was the discussion that https would be too expensive - but today it's standard. Along the same lines he thinks that security measures that are under cost and performance scrutiny today will be standard sooner rather than later. And AWS may be an active change agent in this process.

Specifically for AWS this means that IAM and IAM roles get more important. And this has been achieved pretty well for S3, said Vogels. But how to do this in real databases - which data is accessible to whom - remained a challenge; for that, the fine-grained access control of DynamoDB is the showcase. For instance, mobile applications can access DynamoDB directly - no longer requiring a separation of customers by proxies. And then there is now support for SAML 2.0. Only now - which surprised me a bit - but better late than never.
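The mechanics behind that direct mobile access are IAM policy conditions - per the DynamoDB documentation, the `dynamodb:LeadingKeys` condition key can restrict a federated caller to items whose partition key matches their own user id. A sketch of such a policy; the table ARN and the federation variable are illustrative:

```python
# Sketch of an IAM policy for DynamoDB fine-grained access control.
# The dynamodb:LeadingKeys condition (per AWS docs) limits a caller to
# items whose partition key equals their federated identity. The table
# name, account id and identity-provider variable are illustrative.

import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                # each caller only sees rows keyed by their own user id
                "dynamodb:LeadingKeys": ["${}"]
            }
        }
    }]
}

print(json.dumps(policy)[:40])  # serialize for an IAM role's policy document
```

This is what makes the proxy tier dispensable: the per-user partitioning that a middle tier used to enforce moves into the policy itself.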

Along these lines Redshift gets encrypted, and thanks to a dual-key system only the customer - and not AWS (or other partners) - has access to the encrypted data.


And of course reliability is achieved through the availability zones. And AWS sees the usage maturing, with customers even using different regions for their availability zones. The Japanese earthquakes and hurricane Sandy are the recent events that make businesses consider spreading availability zones across regions.

And with AWS adding snapshot copy for Redshift, customers get the capability to secure their data warehouse easily across regions. And even more importantly, RDS will allow cross-region replicas. This makes migration between regions easier by allowing copies to be spread across regions. This gives customers many options for backup - starting from simple backup, to a pilot-light approach, to a warm standby solution and ultimately to a multi-site solution like the one Netflix is pursuing.


As storage and database usage are key cost drivers for AWS customers, Vogels went over the tiered capabilities of AWS for both storage and IO.

Equally, compute needs to be part of the cost optimization - and there Vogels stressed how important the spot market is. Customers shrewdly taking advantage of the spot market are Hungama for transcoding, Pinterest, which manages its front-end operation there and was able to reduce cost by 75%, and finally Vimeo, which differentiates between free and paid accounts: Vimeo free accounts are transcoded in the spot market, while paid accounts are transcoded on dedicated instances. And the final example was Cycle Computing - which can use all of AWS compute capacity - and it procured 1.21 PFlops with over 16k instances and 264 years of compute to calculate compound formulas.

And the stunning revelation by Vogels in this case was that the cost of running that massive compute was $33k - versus procuring the compute in a traditional on-premise delivery, which would have cost the client $68M.
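A quick back-of-the-envelope check makes the spot-market economics plausible - assuming the "264 years of compute" figure means aggregate instance time across the run:

```python
# Back-of-the-envelope: what blended hourly price do the quoted numbers
# imply? Assumes "264 years of compute" is total aggregate instance time.

instance_years = 264
total_cost_usd = 33000

instance_hours = instance_years * 365 * 24       # ~2.3 million instance-hours
implied_price = total_cost_usd / instance_hours  # blended $/instance-hour

print(instance_hours)           # 2312640
print(round(implied_price, 3))  # 0.014 - low-single-cent spot pricing
```

A penny and a half per instance-hour is well within 2013 spot-market territory, which is the whole argument for workloads that can tolerate interruption.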

And Vogels also announced the G2 instances, which leverage NVIDIA 'Kepler' GPUs, have 1,536 CUDA cores and are great at encoding and streaming video.

Vogels confirmed that these G2 instances are the backbone of the Amazon AppStream product that Jassy announced yesterday. But AWS does not stop there - it also announced a new flagship compute instance, the C3. It runs Ivy Bridge processors and is an SSD-based platform.

And AWS would not be AWS if it did not offer a range of different configuration options.


The showcase for scale was the Dutch company WeTransfer, which transfers artist wallpapers and other attachments that are too large to send via email. And the success of the company is creating a massive scaling problem, as a week in 2013 sees the same amount of transfers as a month in 2012. And needless to say - they solved that with AWS.

Next up was Mike Curtis, VP of Engineering at Airbnb. Not surprisingly, Airbnb is experiencing massive subscriber growth, reaching 4M subscribers in January 2013. And about 150k people are Airbnb hosts on any given night. Again AWS solved the scalability problems for the company. Even more convincingly, Curtis said that anytime AWS has something they could use, Airbnb uses it and does not look further.

Airbnb went from 24 EC2 instances in 2010 to over 1,000 in 2013. Photos are key, as guests pick their host property through them - and photo usage has gone from 294GB in 2010 to 50TB in 2013.

        Most amazingly Airbnb can run all this infrastructure with a 5 FTE operations team.

        AWS and the Internet of Things

Next Vogels went over the many applications of sensor data and real-world machines that AWS is enabling customers to work with. Starting with the Nest thermostat, Illumina dumping sequencing data into S3, Tata Motors instrumenting trucks to predict preventive maintenance, over collaborating with GE on the industrial cloud, to helping capture sensor data from smartphones with startups like e.g. Human, which motivates you to be active for 30 minutes a day - it's all happening with AWS.

The combination of the offline with the online world is the common thread of these applications, said Vogels. And then he got a little geeky and social, showing a life-logging application coming from Sweden - presenting the narrative of his last 72 hours in Las Vegas - as the device takes a picture every x minutes.

The showcase for massive real-world-to-AWS connection then was Dropcam, with CEO and co-founder Greg Duffy on stage. And Duffy made the great point that it was not about the hardware but the software - so Dropcam did not have to build a camera, but a camera web service. And interestingly, Dropcam is the largest video service on the web - with more data uploaded per minute than YouTube.

And as expected, when moving to AWS, usage started to go up massively - the main reason being the free inbound data transfer into AWS, which was a key reason for Dropcam to move to AWS. Then Duffy walked the audience through the Dropcam architecture - as expected, compression starts on the camera, and Dropcam makes use of Scala, Python and PostgreSQL, heavily using DynamoDB.

Where Dropcam gets really powerful - and a little bit of a concern from a privacy perspective (1984 anyone?), too - is that it is enabling real-time video analysis - of course using AWS EC2 to process the massive video load. 

Then it was back to Vogels to go over some interesting AWS-built products that connect the real world with AWS. And the examples came from the transport world, with Moovit and OneBusAway.

One more was mBuilder, which puts sensors into construction sites to monitor e.g. temperatures and other conditions - their data then gets streamed back to AWS and allows efficient management of the construction site.

All that data creates a logging challenge and a data storage challenge - as you cannot afford loss of data. Vogels quoted Netflix's Cockcroft, that Netflix is actually a log-generating application that just happens to stream movies. And that queued up the next topic - and the last but key announcement - around realtime.

        AWS gets serious with realtime

Vogels did a good job of arguing that it matters less and less what happened yesterday or even 15 minutes ago (AWS CloudTrail maybe?) and it all comes down to finding out what happens right now, to drive real-time insights. 

Next he went in a smart way through the deficiencies of the current technology at hand - Hadoop, Storm, Kafka, AMQP et al. - which all work, but are hard to maintain at scale and tough to configure. 

The showcase was echo, which helps resolve URLs and detect spam - with massive inputs of 1,000 events per second on average and 13k at peak, and outputs of 1,100 average and 7,000 peak per second. 

But AWS wants to make it easier - and launched Kinesis (from the Greek: movement in reaction to a stimulus):

And of course AWS makes it massively scalable, allowing it to process TBs, while staying reliable and most importantly - being simple to use. In a later session we learnt that AWS provides a Java-based client library - called the KCL - that makes it easy to administer and ramp these complex systems up and down.

As to be expected, Kinesis ramps up and down gracefully, being able to increment system throughput in units of 1 MB/s in and 2 MB/s out, with a unit (later we learnt they are called shards) being able to process 1,000 transactions per second. 
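Those per-shard figures make capacity planning simple: the shard count is driven by whichever of the three limits - 1 MB/s in, 2 MB/s out, 1,000 records/s - a stream hits first. A small sketch of that sizing rule, using only the numbers quoted above:

```python
import math

# Shard sizing from the per-shard limits quoted above:
# 1 MB/s in, 2 MB/s out, and 1,000 records/s per shard.

def shards_needed(in_mbps, out_mbps, records_per_sec):
    """Return the shard count that satisfies all three per-shard limits."""
    return max(
        math.ceil(in_mbps / 1.0),           # ingest-bound
        math.ceil(out_mbps / 2.0),          # read-bound
        math.ceil(records_per_sec / 1000),  # record-rate-bound
    )

# A stream ingesting 5 MB/s, read at 12 MB/s by consumers, 4k records/s:
print(shards_needed(in_mbps=5, out_mbps=12, records_per_sec=4000))  # 6
```

Here the read side dominates (12 MB/s / 2 MB/s per shard), so six shards are needed even though ingest alone would fit in five.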

Streams into Kinesis can be scaled and so can backend Kinesis applications. What is most important is that AWS plays more and more on the synergistic platform field, building and integrating new offerings like Kinesis into the overall AWS platform. So Kinesis can leverage DynamoDB, Redshift, Storm, S3, EMR and RDS - and Kinesis apps can be deployed to EC2, autoscaling is enabled, and Kinesis streams can even be combined.

Kinesis was the only Amazon product demoed in the keynote - showing how important AWS considers this new capability. And the demo was really impressive - done by AWS' Khawaja Shams - analyzing the Twitter firehose, persisting each tweet and analyzing it from a content and popularity perspective. Persistence was achieved with DynamoDB and analysis done in Redshift. The demo was all about looking up Mars as a planet, but Mars is a popular term on Twitter not as a planet but as the last name of an artist whose first name is... Bruno. A funny way to demo the very powerful Kinesis capabilities.
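Under the hood, Kinesis routes each record to a shard by taking the MD5 hash of its partition key over a 128-bit key space that the shards split among themselves (per the Kinesis documentation). Using a tweet id as the partition key is an assumption on our part for this demo scenario; a toy version of that routing:

```python
import hashlib

# Toy version of Kinesis record routing: MD5-hash the partition key into
# the 128-bit hash-key space and find the shard owning that range. The
# choice of a tweet id as partition key is illustrative only.

NUM_SHARDS = 4
SPACE = 2 ** 128
RANGE = SPACE // NUM_SHARDS  # each shard owns a contiguous hash-key range

def shard_for(partition_key):
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return min(h // RANGE, NUM_SHARDS - 1)

# Records with the same partition key always land on the same shard,
# which is what gives per-key ordering within a stream:
print(shard_for("tweet-401234567890") == shard_for("tweet-401234567890"))  # True
print(0 <= shard_for("bruno mars") < NUM_SHARDS)  # True
```

That hashing scheme is also why shards can be split or merged by dividing their hash-key ranges, which is how a stream like the firehose demo scales up and down.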

Next Shams showed where people are tweeting about Mars from - of course a good demo of the new PostgreSQL capability with its inherent GIS support, showing a US map with the tweet activity.

But the most important takeaway from the Kinesis demo was that this application could be built in around 5 days (of course, with 2 very smart people involved) - and the cost to run it in production was... $5.

        And then it was up to Vogels to close out the keynote - going over all the announcements quickly - PostgreSQL support in RDS, Amazon Kinesis, cross region replicas, Redshift snapshot copies, global secondary indexes and the new C3 and I2 instances.

        As it had to be - the most applause was garnered by Vogels announcing Deadmau5 as musical guest for the AWS party.


AWS keeps innovating at a very fast pace - no doubt. The good news - and that was made abundantly clear - is that AWS does so with the customer in mind. And it keeps providing new value from innovation to all its constituents - AWS CloudTrail caters to security concerns shared across the client side, the PostgreSQL support in RDS caters to developers, Amazon AppStream is geared towards developers building compute-intensive apps, and Amazon Kinesis towards the real-time analysis needs of both enterprises and ISVs. Only Amazon WorkSpaces is a new market entry. 

All that happens while Amazon makes the backbone of its infrastructure stronger. We noted in our takeaways from yesterday that AWS has not added any regions - but there must have been a ton of fiber put into the existing data centers. And you need some physical stability on the data center side to achieve this - as we think these links are the backbone of new services such as the cross-region reads and replicas, Redshift snapshot copy and global secondary indexes. Even if the competition wants these, it just takes calendar time to put all that fiber in the ground and connect it to their data centers to replicate similar features on their infrastructure.

Lastly, AWS was very proud of its new hardware instances, and while the new C3 and I2 instances are very powerful, this is the area where AWS is less strong relative to the competition. But they are probably fine with it - as it's not high-end hardware that wins the cloud wars, but value-added capabilities and real-world network speeds at attractive prices.

Overall we think AWS has moved the yardstick further out for the competition to catch up to - it will be interesting to see how the real-world actions of the usual suspects will materialize as a reaction to this year's re:Invent announcements. 

Lastly, the new Amazon Kinesis offering is the most exciting product in our view - as it moves the realization of the real world much closer into software, at a fraction of the cost previously imaginable. Can't wait to see Kinesis apps being built. 

        A collection of key tweets from the keynote can be found in Storify here.

        And you can watch the replay of the keynote here:


The yearly Dreamforce conference kicked off today in full swing - well, it hasn't even kicked off, as the official start is only tomorrow. But we had a chance to attend the partner keynote - which was so full that attendees had to wait outside before it could get started.

        Platform, platform and platform

It seems to be the message these days - see our takeaways from SAP TechEd and AWS re:Invent - and of course now Salesforce has the same message. To be fair, Salesforce has had it for a while - but the new messaging and positioning of the company around Salesforce1, the platform, is getting more and more important (again). If you have missed it - here is the latest pitch video on YouTube - leaked before the Tuesday keynote:

        The ecosystem matters

And consequently, in the partner keynote this morning at Dreamforce in San Francisco - apart from more of a vertical focus - the ecosystem mattered. So it was time for the partner and platform teams to show the successes of ISVs who have built on the platform. And it was a well-delivered and smooth pitch - build on and be successful. Not only with the platform - but also with supporting programs - e.g. the launch of the AppExchange Accelerate program, which provides help to startups and even more mature ISVs to be successful at every stage of their growth.

But the best reference turned out to be the recently-gone-public CRM vendor Veeva - which has built its life-science-focused CRM suite on the platform. To a certain point Veeva is for Salesforce what Netflix is for AWS - a competitor building on the platform. 

Nothing gives a platform vendor more credibility than competitors building on its platform - it means that you have made a strong enough commitment to the platform to attract ISVs and dissuade competitive concerns.

But then... FinancialForce acquires...

So right after we learnt how well Salesforce has been growing the ecosystem - over 1,000 partners, over 2 million downloads in 2013 - we learn that one of the major ecosystem players - FinancialForce - has acquired vendors Vana Workforce and (the assets of) Less Software. Certainly a good move by FinancialForce - pursuing a complete ERP strategy - but what does it mean for the platform and ecosystem?

Here are a few perceptions Salesforce does not want to take hold around
• is a vehicle to be acquired by larger vendors or Salesforce itself. - This would imply no lasting stay on the platform.
• Or is a vehicle to get easier funding from the VC community. - This may be risky, as funding may not always be as abundant or available.
• Or is a great platform to go public - see Veeva. - This may be risky, as one cannot predict the IPO market.
• Salesforce remains a key application vendor by itself, which also has an investment in many players - like, in this case, interestingly, in FinancialForce. So Salesforce can play many roles in the ecosystem - but needs to remain above the distrust line. 
On the flip side this could be just a blip on the radar screen - as long as Salesforce manages to show growth in the number of vendors and partners building on The next 12 months will tell. 


        Salesforce has done a very good job promoting its platform and making it attractive to ISVs. Other vendors in the business should take note. 

But then the ecosystem is still relatively small - so acquisitions like the ones from today will be watched with a wary eye by players inside and outside the ecosystem. 


Once a year IBM invites analysts to its global analyst meeting in Stamford to brief them on the latest of its software products. The event is very valuable to the analysts, as the IBM software group is so widespread and includes so many products that it's tough to get all the messages and products aligned throughout the year. 

With IBM not having a cross-product user conference (like Microsoft, but unlike e.g. Oracle and SAP), which typically forms a natural synch point, the meeting is even more important.

The conference was well attended - I estimate over 200 analysts were in the audience - and IBM did a very good job mixing general sessions, round tables and mini tent presentations with one-on-ones and social interactions. I'd say the bar is set high for similar events from other vendors.

Let me exclude the SoftLayer aspect from this post - it's so fundamental it warrants a post by itself that I will hopefully blog soon. But this much ahead: in my view SoftLayer is IBM's chance to transform itself into a very different player in the enterprise software market.

        IBM is serious about developers and therefore open standards

I had a chance to attend two sessions led by Danny Sabbah - focusing on the overall developer value proposition of IBM. It looks like the days where IBM could rely on an executive mandate to use the Rational Suite as the development tool are really gone for good. IBM realizes - and that was underlined at times with some remarkable passion - that it does and will keep supporting standards, contribute to open source and make sure that developer tools work in a wider sense - not just with the IBM technology stack, but with all relevant stacks out there.

The investment and commitment to Cloud Foundry has to be seen in that light. And possibly it taught IBM execs a lesson, as the story was that the Cloud Foundry direction was more or less taken by the in-house developers. It then passed the strategic test in the sense that Cloud Foundry is committed to open source and standards - so the step to make all the official moves that IBM has done with Pivotal / Cloud Foundry became a small and easy - but important - one.

        The investment and commitment to OpenStack has to be seen from the same perspective. And IBM is actively contributing to OpenStack with close to 400 developers and almost 1600 commits and over 70 blueprints.

Ultimately these moves are to result in a reduction of lock-in and an increase in the portability of software constructs - something we hear a lot these days - but it looks like IBM is gaining a lot of credibility in this regard.

        The PaaS future is … BlueMix

But to make developers productive, you need development environments and infrastructure, which these days come along as a PaaS platform. IBM has been seriously working in this area - the new offering is code-named (or is it the product name?) BlueMix and is in alpha. Beta is coming soon - I very much look forward to seeing it and learning more.

        And the SaaS Future is …  APIs

It was very interesting to see that IBM is actively looking into exposing the APIs of its more than 100 SaaS applications to developers. This would make the IBM platform the richest API economy out there. Many questions in regards to consumability, granularity, licensing etc. need to be addressed - but the vision and direction are compelling. It also rides on a healthy dose of attractiveness, as it cannot be easily copied by competitors - except maybe by Oracle…

        Watson is … alive and well

I was positively surprised by how realistically IBM sees the status of Watson. Maybe some executives even regret the hype once created around the Jeopardy show. But today IBM is very realistic that the advisor products are in an early phase - while capable enough to move the needle for certain businesses already. With a vertical focus on financial services, healthcare and travel there is a good (and needed) focus on specific industry problems.

And lastly, the decision to create an ecosystem for partners is the right move to not limit the very good Watson DNA to IBM's ideas only - but open it up to the ideas of a large partner community. It was good to see how structured IBM is in dealing with partners in that ecosystem, and impressive that IBM chose a self-service-based partner engagement system. Not sure if this was driven by lack of FTE budget and / or the expectation of overwhelming partner interest - it's always good to be able to scale with software.

        HCM + Social + Analytics = a high potential formula

With IBM combining the Kenexa offering with its social capabilities, it has created a unique setup to differentiate the Kenexa offerings in the HCM market. Paired with the analytical DNA of Kenexa and the overall ambitious analytics agenda at IBM, I hope for more attractive and differentiating analytical offerings to come.

With social capabilities getting baked into the overall platform, IBM has a platform-play advantage similar to Oracle's. And it was good to see that the involved executives around Alistair Rennie realize that opportunity. We will see in 2014 how well it can be leveraged.

Taking an alternate view, it will also be important for IBM to decide if it wants to stay only in the talent management race (see the race definition in our post from HRTech here) – or if it wants to join the big race with some core HR and payroll capabilities. That would most likely require some acquisition though.

        And IBM is serious... about design

As we learnt a few weeks ago, IBM opened its new innovation center in Austin, with a focus on design. Headed by Phil Gilbert, IBM is aggressively hiring and relocating talent to the Austin facility. Gilbert managed to get the assembled analysts to draw vases – and then apply design thinking to how flowers can change living in a home. Great showcase. With many enterprise software vendors emphasizing design thinking (SAP re-emphasizing it with its Fiori designs, Infor with its agency, etc.), IBM may even be late to the party. But certainly IBM products will benefit from the approach and we look forward to seeing the first results soon.

        What I missed

Despite the 1.5 days and a packed agenda, I did not have enough time to look in more detail at the IBM BigData and mobile plans and capabilities – check out the posts of my colleagues who attended these sessions. Or search for #SWGAI and you will find a few thousand tweets....


A very informative event that gave a good glimpse inside the vast IBM software product landscape. The good news is that it is all more or less coming together in the cloud, and IBM has some smart (pun intended) differentiators it can play to position future products very well in the market. There certainly is a lot of work ahead – and IBM needs to be careful not to disrupt this process by too many acquisitions… but that has not concerned Steve Mills and team too much in the past – so we expect the same for 2014 – and some very exciting products are in the oven.


        Kudos to IBM for creating and updating their own Storify - you can find it here.

        And my Dozen - serious and some not so serious takeaways on Storify here.

        0 0

So the 2013 edition of Dreamforce is over and it's time to look at the key takeaways from this 5 day mega event. Needless to say that everything Dreamforce is BIG – attendee numbers, space used, concerts held, bands playing, prizes (1 M$!) etc. – which makes Dreamforce the dream event for any CMO. I am sure any marketing professional not working at Salesforce wishes they could spend similarly on their customer event. And Salesforce is under pressure to keep outdoing itself – as hype is a substantial ingredient of Dreamforce.

But let's cut through the hype and look at what ultimately matters: the technology behind the announcements.

        Despite A Litany Of Annual Trendy Themes, Customer and Developers Still Matter Most

At the end of the day, Salesforce remains a CRM company – and it's good to see that Salesforce remains focused on customers. Customer orientation has always been front and center at Dreamforce.

        Two years ago it was all about consumers  and their journey towards social and why companies needed to respond by turning themselves into social enterprises. Salesforce was there to help them  (e.g. listening to customers with the Radian6 acquisition). 

        Last year the message was around becoming a customer company - and in order to become that, Salesforce needed to get stronger on marketing segmentation and automation and acquired Exact Target. 

Now it's all about enabling the combination of social, marketing and connected devices, which Salesforce has labelled the internet of customers. But for that you need a new platform, and there we have this year's key announcement with Salesforce1.

Equally present was Salesforce's heavy courting of developers. Every keynote and briefing at Dreamforce began by mentioning the importance of the developer and how Salesforce wants to become an (even more) developer-friendly vendor. We understand Salesforce fully here, but wonder if the average Dreamforce attendee cares to worry about how happy its developers are. After all, most Salesforce customers don't have developers on their payroll.

More importantly, Salesforce cares about administrators and simplifying their daily work of maintaining and configuring Salesforce products. Consequently, this is how Salesforce has architected Salesforce1 (more below) – system administrators are going to be key for Salesforce1's success.

        Giving the Salesforce administrator the room to administer and enable the mobile platform is a key move - and equally a nice career addition. 

        Salesforce1 demystified

If you take the Dreamforce hype away, Salesforce1 is basically the delivery of the product formerly known as Touch, in a certainly revamped and improved form. Under the common mobile-first banner, Salesforce is now delivering a unified user experience on iOS and Android devices. Customers no longer need to switch between e.g. the classic Salesforce mobile app and the mobile Chatter app. And had you deployed Touch – no more switching to that app on mobile devices either.

So certainly a welcome and good move by Salesforce. And definitely a major feat on the engineering side to bring all these platforms together and to be able to extend and run Heroku-built apps on the Salesforce1-supported devices.

And Salesforce equally deserves credit for making Salesforce1 a platform on which 3rd party content and applications can run. Certainly the right path to success if you want to own the mobile user experience and make it more user friendly. Being able to deploy Salesforce1 mobile applications consistently and declaratively is a pretty unique feature.

And could it be more than that? Let's look at what it could be...

        Salesforce1's big potential

To make its apps more consumable for mobile, Salesforce had to make its APIs more granular. Salesforce claims a tenfold increase in the number of APIs. And with that, Salesforce1 apps become more nimble and more powerful to deploy. And it becomes easier for ISVs to add their own APIs and build highly differentiated mobile apps.
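A hedged sketch of what such granularity means in practice: instead of pulling a whole object, a mobile screen can request just the fields it needs. The path shape follows the Salesforce REST API convention of that era, but the API version, record ID and field choices below are invented for illustration.

```python
# Sketch only: why granular APIs suit mobile clients. The record ID is
# made up; the path follows the general Salesforce REST convention of
# requesting only selected fields for one record.

def record_url(obj, record_id, fields):
    """Build a narrow REST path that fetches only the listed fields."""
    return "/services/data/v29.0/sobjects/{0}/{1}?fields={2}".format(
        obj, record_id, ",".join(fields))

# A mobile "account card" needs two fields, not the whole record:
card = record_url("Account", "001xx000003DGb0", ["Name", "Phone"])
```

The point is less the URL shape than the payload economics: a narrow request keeps mobile responses small, which is what makes composing many small calls into one screen viable.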

If Salesforce takes all this and makes it the backbone for its next generation applications – those that run in desktop browsers – then we see a significant opportunity here. Salesforce would be able to re-invent not only the mobile experience, but also provide a platform for highly consumable and composable business applications – across different delivery channels.

For that, Salesforce will have to strengthen its application server capabilities – or something of similar function. And whatever that platform will be – Salesforce1 – it will have to be a good integration platform for 3rd party content. The good news is that Salesforce has already done a lot to allow both ISVs and customers to access 3rd party content in the backend.

        Advice for Salesforce customers

This is good news for Salesforce customers – mobile users get more modern applications to work with. The backend gets better at accessing 3rd party data and exposing it on the mobile side, certainly a good capability. But analyse how much of your CRM processes can be run on a mobile platform; if there are gaps, press Salesforce on completing those soon. And evaluate alternative platforms for your Internet of Things plans before you commit to Salesforce for them. Your CRM application vendor is a natural proxy for them – but so is your ERP vendor.

        Advice for Salesforce partners

This is also good news for Salesforce partners – it will be easier to build more powerful mobile apps. Look at the gaps in the Salesforce automation portfolio from both a horizontal and a vertical perspective to chart your product plans. For services partners, we see the mobile business shrinking, as Salesforce has smartly put more productivity into the admin console. With the declarative capabilities of Salesforce1, the market for re-building mobile apps – born from the technical disconnect between the browser and the mobile platform – will keep shrinking.

        Advice for Salesforce competitors

The days are over when competitors could poke at Salesforce for pitching a social enterprise while requiring two separate apps on the mobile device. Salesforce raised the stakes with Salesforce1's ability to push functionality, including customizations, consistently to mobile platforms – which will quickly become table stakes. Focus on the more pedestrian browser UI of Salesforce instead. And if the Salesforce pitch on the internet of customers shows signs of success, then you had better have a strategy for embracing the Internet of Things soon, too.

        Advice for Salesforce

A good move by Salesforce, which deserves credit for a much improved mobile experience coupled with a solid platform approach and a good eye on developing the vital ecosystem for platform success. Salesforce1 will have to grow quickly to become the overall Salesforce platform – for delivery across all user interaction channels. And it will have to beef up on the integration side quickly to really become a contender in the Internet of Things arena. Another missing key ingredient for overall success are BigData and (true) Analytics capabilities. Topics for next year and Dreamforce 2014?


        Another Dreamforce with a lot of superlatives, a great conference for the Salesforce ecosystem. Great presentations, great speakers and a lot of pressure for Salesforce to top that next year - once again. I am certain Salesforce will succeed with that. 

How it will succeed on the Salesforce1 side will be something we will keenly watch – for now it's a much improved mobile experience and platform that has the potential to become much more – benefiting customers, partners and Salesforce. The next 12 months will tell...

        0 0

It does not happen often – or has it ever? – that two major cloud players announce general availability (GA) of, or a new release of, their respective cloud platforms on the same day. HP announced version 13.5 of its public cloud and Google announced the GA of Google Compute Engine (GCE).

        Some common trends emerge

There are some common trends that both the Google and HP announcements show:
• Prices keep falling - Not surprisingly, both vendors addressed reductions in pricing, following Amazon's AWS lead. It's almost becoming a tradition that cloud announcements are coupled with price reductions, reflecting the lower cost of running hardware environments, progress in automation and economies of scale. Notable on the Google side is the change in pricing model, too – which takes away the cost for IOPS and makes the cost of persistent disks more predictable – a move many users had been asking for.

• Memory matters - It's yet another indication that memory-hungry applications are moving to the public cloud – see BigData – but stacks overall are getting more RAM intensive. So vendors react with larger memory instances – HP puts in larger standards (3XL - 5XL) and more memory (L - 2XL - 16 cores and up to 120 GB of RAM), and Google launches three new instance types with up to 16 cores and 104 GB of RAM (limited preview only, though).

• Networks matter - HP makes the case pretty clearly by claiming faster and larger uploads – you sure want to get the data into the cloud. Google touches networks implicitly by taking IOPS out of the pricing scheme – see above. Both moves need investments in network infrastructure that both companies are obviously undertaking.

• Disk becomes the migration tool - And despite all the push for in-memory, it's the good old spinning rust that makes the cloud go round. You can now keep Google Persistent Disk volumes around even through planned maintenance, and volumes can be unmounted from one VM and mounted on another. On the HP side, Block Storage now gets SLA coverage with the move to GA, and storage containers can be synched across regions.
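The pricing-model change in the first bullet is easiest to see with numbers. A minimal sketch – the rates below are invented for illustration, not Google's actual price list:

```python
# Illustrative only: rates are made up. The old model charged per GB of
# persistent disk *plus* per million I/O operations, so a busy month cost
# more than a quiet one. The new model is a flat per-GB rate with I/O
# included, which makes the bill predictable.

def old_model(gb, million_ios, gb_rate=0.10, io_rate=0.10):
    return gb * gb_rate + million_ios * io_rate

def new_model(gb, gb_rate=0.12):
    return gb * gb_rate  # independent of workload

busy_month = old_model(100, 500)   # heavy I/O inflates the old bill
quiet_month = old_model(100, 5)    # the same disk, much cheaper
flat = new_model(100)              # same bill either way
```

Even if the flat rate per GB is nominally higher, users gain a bill they can forecast – which was the complaint about per-IOPS pricing.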
        ... and then common trends end - so let's look at the respective differentiators:

        HP's focus is the hybrid enterprise cloud load

As was to be expected, HP beefed up its console capabilities, adopting the OpenStack Horizon project. And the console matters to HP given the hybrid nature of most of its existing and expected cloud deployments. Given the nature of consoles in a hybrid deployment, user roles are of key relevance, and HP added a number of identity roles to make administration easier.

Not surprisingly, HP also focuses on software defined networking (SDN) advancements for HP Public Cloud that are compelling for its user base and help administer and secure cloud loads of different sensitivity. And lastly, management of availability zones is improved – surely more a necessity for today's cloud deployments.

        Google's focus remains the high end public cloud load

With GA, Google is loosening its previously exclusive grip on the Debian and CentOS Linux distributions. Google adds support for SELinux and CoreOS and expands kernel support for Docker, FOG, xfs and aufs. Interestingly enough, support for SUSE, FreeBSD and RHEL (in limited preview) is also being announced. This addresses previous criticism of GCE being too Google-centric.

And Google keeps honing its (so far) nearly unique capability of doing live migrations of virtual machines (some VMware-based cloud providers offer the same, as @cloudpundit correctly pointed out to me). This certainly is a significant reduction of maintenance and administration needs over other public clouds. It will be interesting to see if Google can make this stick as a differentiator over other clouds and force competitors to provide the same (e.g. VMware is rumored to be working on this).

Google has also managed to start up a partner program – something I had somehow missed – and it has seen good uptake, with notable mentions being Red Hat, Rightscale, Scalr and SUSE. Definitely a sign that the Google Cloud platform is getting traction in the cloud ecosystem.

        Advice for customers

The cloud keeps getting better and better, and customers need to look into cloud usage for their loads, whether they come from traditional apps or from newly created apps. Given the focus on higher quality machines in the cloud, there is definitely a sign that customers who do not have a public cloud strategy for e.g. BigData risk being left behind their competitors – always assuming these strategies come to fruition.

Google and HP offer two very different IaaS offerings and are probably interchangeable only for some subset of cloud applications. Before making decisions, consider the Constellation Research IaaS Trends report that can be found here.

        Advice for competitors

In the competition for high quality loads, make sure your hardware can match the latest from Google and HP (and certainly AWS, too). Refreshing and upscaling your hardware with higher specs, enabling higher value cloud applications, is a key strategy to keep prices up.

The noticeable absence of PaaS elements in the recent cloud announcements reflects the pioneering landgrab mode the market is in – but it also offers an opportunity for differentiation. Contrast the recent AWS, Google and HP announcements with Salesforce's Salesforce1 announcement and you can see the opportunity for more developer-centric tools and benefits in the overall IaaS battle.

        Advice for Google

Google certainly does well with the overdue GA of Google Compute Engine. One has to wonder what held the company back for so long – on the flip side, waiting has probably not hurt Google's cloud business either. Opening up to more Linux distributions and more kernels was certainly a key table stake. Likewise, Google deserves kudos for not only putting up a partner program but also being able to attract key cloud players as partners.

Google will now have to show that it can cater to enterprise needs and that it can attract significantly more load from ISVs and startups as well as end users.

        Advice for HP

HP deserves kudos for coming out with a release and being very transparent about known issues. Whether there were too many issues, as some reports suggest, is a good question. Only the future can tell if the HP executives pressed the green release button too early – or not. The focus on software defined networking and related security issues will resonate well with the target customer base.

HP is exposed to the whole OpenStack experience, and as with every open source commitment there are pros and cons on a pretty wild ride. HP now needs to build more credibility around OpenStack and further solidify its offering.


A good day for the cloud, in which some common trends beyond price erosion appear – higher end instances are in demand, networking is (no surprise) crucial, and disk is the migration utility.

Bottom line: Google and HP offer divergent and differentiated value propositions. HP is taking the OpenStack road to the hybrid cloud; Google is firmly committed to its own IP and a high end experience for deployments on its cloud. But the competition with AWS does not stand still – comparing these announcements with what Amazon announced at re:Invent, there remains a lot of catching up for Google, HP and the rest of the IaaS players.

        0 0

With SAP being a global company, it has regional editions of the Sapphire and TechEd events. For anyone following the company, it's always a key question whether the follow-up events to the traditionally leading North American event are pure repeats – or whether there are more or less fine-tuned changes.

For this year's edition of the TechEd developer conference, it was more or less the same between Las Vegas and Amsterdam – but the event in Bangalore was an event of its own. There is a lot to digest from this TechEd, which started today – December 11th – and will run until December 13th.

        The big picture

So let's start with what I thought was the most interesting and important part, to see if it had changed from TechEd in Las Vegas and Amsterdam – and the good news is, it has not; the picture is still very much the same:

Wedged between HANA the database and Fiori the front end, a number of different applications run on the HANA Cloud Platform. And while this is easy to draw and compelling to explain, it has massive repercussions for SAP customers, SAP partners and SAP itself. More to come in a separate post, hopefully soon.

        Takeaways from the press release

        So let's dissect and comment on the key pieces of the press release:

        Simplified Development Experience 
        To address the fast and agile nature of today’s market, SAP has launched a regional test drive of a new open development environment for SAP HANA. SAP River intends to provide a real-time development environment for rapidly building complete native applications powered by SAP HANA and enable increased developer productivity through a simpler development process, easily maintainable source code and a clear separation of intent and yet an optimized native execution on SAP HANA. 

MyPOV - This was certainly the big news – SAP (once again) creates its own development environment and language with SAP River. For those watching SAP long enough, developer productivity was the key reason ABAP was created in the first place and kept in place till today (ironically, it was Shai Agassi at an event in Bangalore who said that ABAP would be around for a long time, because it's so productive).
We will have to see much more of River to judge whether it really merits the case – my general view (for now) is: the world (and developers) do not need another development language and environment.

        OpenUI5, Open Source Variant of the SAPUI5, for the Developer Community 
SAP is contributing key portions of its SAPUI5 framework as open source code on the GitHub site under an Apache Version 2.0 license. As part of the effort, SAP has developed different themes, framework features and control libraries to help provide a consistent end-to-end experience and foster reuse and efficiency, for example, via responsive layouts.

MyPOV - This was widely popular with the audience and in social media. And in general it's positive when large tech vendors contribute to open source and put muscle and investment behind it. But usually they take a lot from open source and then contribute back some fraction of that investment – OpenStack is a prominent example. In SAP's case with SAPUI5 – now OpenUI5 – SAP is the only contributor, and the question is who will collaborate and invest in it, and in which areas. If there is someone, SAP should have had them on stage and in this press release.
Another potential red flag are the details of what SAP will keep and not contribute – something that had not been communicated as I wrote this (yet).

        New Service Broker for Cloud Foundry to Connect to SAP HANA
         SAP announced the code contribution and availability of a Cloud Foundry service broker for SAP HANA. Developed in close association with Pivotal and now available as open source on GitHub, the service broker will allow any Cloud Foundry application to connect to and leverage the in-memory capabilities of SAP HANA. - 

MyPOV - Long awaited, and now it's here – a good move for HANA the database. With the new River, SAP is competing with Cloud Foundry as a platform for developers to build next generation applications on. I am sure SAP and Pivotal will separate the two by the type of applications you can build with each platform – but the partnership with Pivotal now becomes more of a database partnership than anything else.
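The broker's effect on an application is easiest to see from the Cloud Foundry side: once a service is bound, its credentials show up in the app's VCAP_SERVICES environment variable. A minimal sketch – the "hana" service label and the credential fields are assumptions for illustration, not the broker's documented payload:

```python
import json
import os

# Sketch of how a bound Cloud Foundry service reaches an application:
# the platform injects credentials into the VCAP_SERVICES environment
# variable as JSON, keyed by service label.

def hana_credentials(env=os.environ):
    """Return the credentials dict of the first HANA-like service, or None."""
    services = json.loads(env.get("VCAP_SERVICES", "{}"))
    for label, instances in services.items():
        if "hana" in label.lower():
            return instances[0]["credentials"]
    return None

# Simulated environment, shaped as a bound app would see it:
fake_env = {"VCAP_SERVICES": json.dumps({
    "hana": [{"credentials": {"host": "hana.example.com", "port": 30015}}]
})}
creds = hana_credentials(fake_env)
```

Whatever the exact payload the broker delivers, this is the contract that lets "any Cloud Foundry application" reach HANA without hard-coding connection details.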

        New Node.js Connector for SAP HANA 
In November 2013 SAP contributed SAP HANA database client via GitHub site that enables developers to efficiently connect Node.js applications to SAP HANA.

MyPOV - This goes along the same lines as the previous paragraph on Cloud Foundry – only here SAP does real good for open source, in the sense that a tech giant contributes (and presumably maintains) support for a popular open source framework (in this case node.js) as part of a much larger open source ecosystem. Of course with the interest of more node.js projects adopting HANA as their database – but that is a very legitimate motivation for SAP. Well done.

        New Beta Version for SAP PowerBuilder 
        A new closed beta version of the SAP PowerBuilder application development tool is intended to support new operating systems, databases and open standards and is planned for limited availability on December 15, 2013. SAP is committed to innovation without disruption and with the new beta release, beta customers can use their current investment in applications, knowledge and skills while they take advantage of new SAP innovations. 

MyPOV - With the Sybase acquisition, SAP also acquired one aging but prominent application development platform, the leader of the RAD race of the '90s: PowerBuilder. It's good to see that SAP keeps investing in the product, as there are still a lot of clients out there running applications on PowerBuilder. If I understood Reh correctly, PowerBuilder will be able to generate SAP River RDL code and thus become a contributor of code and (hopefully for SAP) developer hands. Whether that will work out, we will see – but it's good to see that SAP leverages all the developer assets and mind share at its disposal to get River off to a running start.

        Easy Access to SAP Technology 
        A unified developer license covering all major platforms and technologies significantly simplifies the developer experience and makes it easy to get access to key SAP technologies. 

MyPOV - It's always good to simplify licensing for developers – the details will have to be hashed out. But given that it looks like Schmerder created and updated it, I am pretty confident it's the best that SAP was able to do at this point. Others will dissect it and find the issues.

        Other key takeaways

As we talk about rivers and the event was in India, not surprisingly there is an Indian river inspiring a new codename – the Ganges. And SAP Ganges is the latest attempt by SAP to model the commerce happening between its customers. Remember, SAP tried this in the dot-com boom era with SAPMarkets – and wasn't that successful – but with the Ariba and hybris assets in house, it's well worth giving it another try. It looked like Mondkar announced the development roadmap for Ariba through Sapphire in June 2014 – let's see what we hear and see between now and then. SAP Mentor Dick Hirsch has written a great first take here.

And then Fiori is making progress – progress it needs to make, as it was recently (at TechEd in Las Vegas) declared the one and only SAP UI going forward. For that it needs to gain capabilities way beyond its self-service user interface origins – and the demos of the sales forecast and manufacturing planning are encouraging. But a lot more needs to happen for a professional user's interface, powerful enough to be the user interface from 9 to 5, plus overtime.


It's good to see SAP courting developers, but so is everyone else. If I put my developer hat on and compare the TechEd messages with the ones I heard at Google I/O, Microsoft Build, VMworld, Oracle's JavaOne, AWS re:Invent and Salesforce's Dreamforce in the last 6 months, then SAP and River have their work cut out for them. All these vendors court enterprise developers and have larger developer ecosystems than SAP has ever had control over. Valiant to scramble the PowerBuilder folks, and kudos for supporting it – but that will not give SAP critical mass.

If SAP had e.g. announced support for the super popular Docker, both to build on and to deploy on HANA, I am pretty sure it would have garnered (even) more attention and ultimately more apps running on HANA.

But then, SAP is the largest business application vendor and certainly can try to attract developer masses into its proprietary architecture – that's the control ultimately all enterprise giants want to have. But struggles and ultimately little success with Apex show that the proprietary path SAP seems to be taking with River is a very challenging path to success. But never say never – I have lots of respect for Klein and team.


        And you can find the Storify here.  

        0 0

For quite some time there has been wide agreement on some criteria for cloud-based applications – namely that they had to support multitenancy all the way down to the database level, they had to be deployed on a public infrastructure, they could only have one production line of code, and so on.

        Already slaughtered - no customizing

For a long time, cloud application vendors maintained that they could not support any form or shape of customizing – as they otherwise would no longer have a cloud application.

In my view this was a bit of a self-serving argument, as it allowed the vendors to move fast and with little complexity from release to release. As a matter of fact, the agility the cloud provides is most beneficial to vendors.

But to be fair, more and more vendors have started to support more or less elaborate ways of customizing their cloud applications. So already in 2013 we did not hear the moniker 'we are cloud, we don't allow customization' (much) anymore, which is good progress.

        The first to go in 2014 - database multitenancy 

We already wrote in mid 2013 that database multitenancy as we knew it – a database containing rows of data owned by different clients – is largely an architecture of the past. It was largely required due to hardware constraints of the very first cloud architectures, but should not be deployed in a modern cloud storage design in 2014 and onwards. Too many advantages argue for the end of database multitenancy – most prominently access security, predictable performance and operational advantages.

You could argue that database multitenancy has already disappeared in new, state of the art cloud architectures – but 2014 will probably see this cloud architecture feature of the past being driven close to extinction in modern cloud applications.
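To make the architectural difference concrete, here is a minimal sketch in Python with SQLite standing in for the cloud database: classic shared-table multitenancy depends on every single query filtering by tenant, while an isolated store per tenant has no foreign rows to filter out in the first place.

```python
import sqlite3

# Shared-schema multitenancy: rows of several clients live in one table,
# and every query must filter on tenant_id -- the system is one forgotten
# WHERE clause away from leaking another tenant's data.
shared = sqlite3.connect(":memory:")
shared.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
shared.executemany("INSERT INTO orders VALUES (?, ?)",
                   [("acme", "widget"), ("globex", "gadget")])
acme_rows = shared.execute(
    "SELECT item FROM orders WHERE tenant_id = ?", ("acme",)).fetchall()

# Isolated store per tenant: a separate database per client, so there are
# no cross-tenant rows to filter and access security falls out of the
# deployment model itself.
acme_db = sqlite3.connect(":memory:")
acme_db.execute("CREATE TABLE orders (item TEXT)")
acme_db.execute("INSERT INTO orders VALUES ('widget')")
isolated_rows = acme_db.execute("SELECT item FROM orders").fetchall()
```

The isolated layout also delivers the predictable performance and operational advantages mentioned above: one tenant's workload or backup window no longer competes inside another tenant's tables.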

        Next - public shared infrastructure

It used to be a table stake of cloud architectures that cloud applications had to be deployed on public, shared infrastructures. And while that is desirable for most applications, there are more and more companies out there that do not want their applications hosted on a public infrastructure. Some may say this is triggered by the whole NSA / PRISM sensibility; in my view, the saturation of early cloud adoption and the need of cloud application vendors to grow revenue-wise play an equally important role.

And the vendors are reacting and gearing towards that – the AWS government cloud is an example, and Salesforce supporting the HP Superpod goes in that direction. And with most cloud application vendors embracing OpenStack, a deployment of their cloud applications on an on-premise OpenStack infrastructure is technically possible and, in my view, likely in 2014.

        Of course cloud purists will now roll their eyes - and start to argue... we will see what 2014 brings.

        And then - one release for all

As cloud applications get more and more adoption, it becomes more of a challenge to upgrade these applications centrally and synchronously for all customers. Cloud vendors have for the longest time argued (and in my view even hidden a little) behind the claim that if a single customer had their own version of the code, it would no longer be a cloud application. That's of course not accurate... and with more flexible deployments in 2014, we will see cloud vendors begin to support different code levels by customer.

        Implications for customers

It will be key for customers to make sure their vendor supports the more complex code deployment landscape that results from slaughtering some of these sacred cows. Do not take 'that's not cloud' as an answer anymore in 2014.

        Implication for vendors

If you are not revisiting your code delivery, application configurability and deployment options, 2014 may be a rude awakening for you. Better to disrupt and be early on these trends than be disrupted by the competition. Look at OpenStack as the easy way out that a number of the larger cloud vendors have already adopted or at least are looking heavily into.


Nobody knows what the future holds – otherwise those who knew would play the lottery and win every week... but it's about time that the criteria that defined a cloud application for the longest time are being revised by the market. Even sacred cows do not live forever.

        In my view - a lot of that will happen in 2014. 

        P.S. Don't miss the 2014 cloud trends that fellow Constellation Research colleague Ray Wang and I have put together here.

        0 0

In a press release today, IBM announced its intention to extend its current data center landscape from around 20 today to up to 40 data centers in 2014, effectively doubling the data centers running on the SoftLayer architecture (our take on the acquisition is here). To get there, IBM plans to invest 1.2B US$ in the coming quarters.

And with that IBM kicks off the cloud data center monopoly race - to borrow from the popular board game. Of course the board game's most prized possession - the Boardwalk - does not apply to the cloud data center race, as no player will be able to own a country exclusively. But a land grab it equally is - as you can only build a finite number of data centers in a year, and put only so much fiber in the ground. So being early and fast matters.

        Location, location, ....

We may be seeing the old brick and mortar retailer adage applied to cloud data center locations in 2014 and going forward. Location matters for a variety of reasons, the most prominent among them:

• Data privacy compliance - Many countries have strict requirements regarding where local data can be stored. The industry and customers have largely been ignoring this and looking the other way. The conversation with prospects and customers to win cloud loads gets significantly easier when you have an in-country data center.
• NSA / PRISM fallout counter - The recent NSA / PRISM scandal gives enterprises a number of new and extra concerns about naively using the cloud. Local and national data centers, under local jurisdiction, help to address this concern.
• Performance gains - Quite rightfully, many users around the world complain that a number of cloud apps work best in Silicon Valley and work OK in the US - but try them in Cape Town or Melbourne. As much as vendors want it, the internet is not a consistently performing system, and placing data centers closer to users benefits their cloud application user experience.
• Reliability gains - Ever since the widely publicized lessons learned from AWS clients not using the AWS availability zones correctly, the smarter enterprises have realized that their cloud processes and data need to run in more than one location. Additional data center locations help enterprises design an optimized reliability strategy.
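The performance point above has a hard physical floor that a quick back-of-envelope calculation makes tangible. The sketch below is my own illustration, not from any vendor material; the distances are rough, illustrative values, and real-world latency is always higher than this theoretical minimum because of routing, switching and queuing.

```python
# Back-of-envelope: lower bound on network round-trip time (RTT) from distance.
# Light in optical fiber travels at roughly 2/3 of c, i.e. ~200,000 km/s;
# real routes add switching and queuing delay on top of this floor.

def min_rtt_ms(distance_km: float, fiber_speed_km_per_s: float = 200_000.0) -> float:
    """Theoretical minimum round-trip time in milliseconds for a given
    one-way fiber distance, ignoring all routing and processing overhead."""
    return 2 * distance_km / fiber_speed_km_per_s * 1000

# Rough one-way distances from a Silicon Valley data center (illustrative only):
for city, km in [("San Jose", 50), ("New York", 4_100), ("Cape Town", 16_500)]:
    print(f"{city}: >= {min_rtt_ms(km):.1f} ms RTT")
```

Even under these best-case assumptions, a user in Cape Town pays well over 150 ms per round trip to a Silicon Valley data center - and a chatty web application makes many round trips per page. That is the physics behind the location argument.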

        What will be the 2014 prized possessions?

During 2013 the prized possessions for IaaS vendors were Australia and China. If memory serves me right, it was AWS that started the race for Australia and Microsoft the one for China. Now we see IBM leading for new countries, e.g. Mexico. The next weeks and other vendor announcements will show what the prized possessions will be in 2014.

        IBM InfoGraphic from here

        Load drives the location strategy

As in the past, IaaS providers will have to chase load to make their investments work. All of them bring significant load by themselves - IBM for instance has over 100 SaaS applications, AWS its online business, Microsoft the Xbox etc., Google its search and apps load, and of course SAP and Oracle their respective SaaS apps.

What's interesting to note is that when it comes to sources of different load, IBM has been able to preserve the strong SoftLayer gaming presence. This was and remains one of the fortes of the SoftLayer architecture and drives a valuable and high end load profile. It's good to see game developers keep trusting SoftLayer after the IBM acquisition, and IBM being able to expand this high end load profile.

        As an interesting side note - two of the data centers will be dedicated to US government cloud loads - a business all cloud infrastructure providers are competing for. 

        IaaS vendors need to overlay their existing and expected load profiles (from new subscription revenue) - to chart their planned and next data center locations. As all the players move beyond 20 locations in 2014 - we do not expect the coverage to differ significantly yet - once we move post 40 locations we may see different worldwide coverage. 

        Watson throws his weight in

With last week's IBM announcement to form a division around Watson, it's no surprise that today's announcement includes directing Watson load to the new data centers. Depending on the success of IBM selling Watson, it could quickly become the largest driver of IBM cloud load. The interesting aspect to watch is that, due to the high performance nature of the Watson applications, IBM may be moving more in the direction of high end cloud infrastructure. It remains to be seen if IBM will join Google in that positioning, or if it will provide a two tiered cloud performance offering. Both require investment, and that's what we are seeing. Going forward, the location of new Watson clients may give an indication of IBM's further cloud data center locations.

        SoftLayer is key

        As previously blogged, SoftLayer is central to IBM's cloud strategy. All new data centers will follow SoftLayer design and code, the separate, triple network architecture, the flexible and transparent deployment options, the single API design, the central admin console etc. 

It's impressive how fast IBM is moving to the SoftLayer architecture - and it shows at the same time how high the table stakes in the cloud game are. Competitors still counting on IBM moving at its former pedestrian speeds will be surprised.

        Challenges remain

In the past we have seen cloud vendors use multiple data center locations - but then deployment decisions would usually put a single, main system into one location, with n backup sites. If IaaS and SaaS vendors want to be in line with local data privacy laws, they will need to come up with an architecture that makes their systems run in a distributed data center environment. Many questions regarding security, access, code and data replication, reporting etc. remain. This will be a significant headache for many SaaS vendors, and we expect the smarter, high end IaaS vendors to increasingly help their SaaS ISV clients address these challenges in the future.
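One small piece of that distributed architecture can be sketched in code: jurisdiction-aware placement, where a tenant's primary system and its backups must all stay inside a compliant set of data centers. This is purely my own minimal sketch of the idea; the region codes, site names and rules below are invented for illustration and imply no real vendor API.

```python
# Hypothetical sketch of data-residency-aware placement: each tenant's
# primary and backup copies must come from a pool of data centers that
# satisfies the tenant's jurisdiction. All names are invented examples.

REGIONS = {
    "DE": ["fra-1", "fra-2"],           # German tenants stay in Germany
    "EU": ["ams-1", "par-1", "fra-1"],  # EU-wide pool
    "US": ["dal-1", "sea-1", "ash-1"],
}

def placement(tenant_country: str, copies: int = 2) -> list[str]:
    """Return primary + backup sites that keep all copies of a tenant's
    data inside the compliant pool; fail loudly rather than place data
    in a non-compliant location."""
    pool = REGIONS.get(tenant_country)
    if pool is None:
        raise ValueError(f"no compliant data center pool for {tenant_country}")
    if len(pool) < copies:
        raise ValueError("not enough in-jurisdiction sites for requested copies")
    return pool[:copies]

print(placement("DE"))       # primary fra-1, backup fra-2
print(placement("US", 3))
```

The toy example already surfaces the real tension the paragraph describes: a reliability strategy wants copies spread widely, while compliance shrinks the eligible pool - which is exactly why vendors need more in-country locations before they can offer both.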

        As for IBM, it claims that SoftLayer addresses a number of these challenges and it will be key to follow how quickly IBM will be able to show uptake of these capabilities in real customer success stories - addressing these specific challenges e.g. in the vast IBM SaaS portfolio.  


All cloud infrastructure players need to continuously invest in expanding their locations and capacities. Credit goes to IBM for being first out of the gate in 2014 and moving the yardstick from around 20 to 40 cloud data center locations. It's now up to the usual competitors to respond, and we will see their response in the next weeks. Stay tuned.

The other aspect is the strong commitment to the SoftLayer architecture - which seems to have invigorated IBM's cloud ambitions and accelerated its cloud roadmap. A lot of 1B+ US$ acquisitions have faltered in the high tech industry - by all indications the SoftLayer acquisition is not one of them. On the contrary.

        Other IBM related posts
        • Are we witnessing one of the largest cloud moves - here
        • First takes from IBM's Software Analyst Insights - here


On January 23rd Workday officially unveiled its Update 21 to the public in various webcasts; the details of the release can be found in the press release here. Another highlight was the new user interface that Workday is releasing to its customers by the end of January. All the highlights of Update 21 can be found here.

        A new user experience

With Update 21 Workday is (finally) moving away from its decision to run its user interface on top of Adobe Flex - a decision that in hindsight (which is always 20/20) wasn't the best. It is never easy for enterprise software vendors to change user interfaces - first and foremost for their customers' users, who need to learn a new user interface - but it is also a lot of work for the vendor. Even when you build on a declarative architecture, as Workday says it does, it means testing and validating all the screens - not to mention the time to design and improve existing screen layouts. There are simply only so many hours and days between releases in the SaaS world - so moving the whole user experience is a huge task that the Workday team deserves kudos for.

        Workday's Joe Korngiebel walks through the new User Interface, from here

And moving to HTML5 as a standard certainly looks like the right decision, too - though I personally would still like to see a transaction heavy, professional user type of design. But that's a challenge not only for Workday but for all vendors moving to HTML5. An indication of these challenges is that Workday has - for now - left out highly interactive user interface elements like the org chart and the 9 box grid, which still require a Flash container.

Workday also seems to have done away with one of the major challenges of the previous user interface, namely pop ups piling up over each other. No enterprise application can probably exist without pop ups, but the current Workday user interface implementation of rolling the pop up over the existing canvas is certainly an elegant approach to the usability challenge. Familiarity with consumer applications on smartphones, and users mastering those user interface concepts, should certainly help.
        The new Workday start screen - from here

The new user interface is also the farewell to a marquee Workday user interface control - the wheel - which may have overstayed its welcome; sometimes it's amazing how long certain interface elements can hang around. Workday has replaced the wheel with a plain list of icons - a user interface paradigm that scales much better for a dynamically growing enterprise application.

Moreover, Workday has implemented a new and more powerful search capability, which will certainly benefit line managers and HR professionals. It will require some getting used to though, as it breaks the habit of searching hierarchically through the data model - something not desirable, but which users have been trained to do for decades. It will be very interesting to hear user feedback on this new capability.

        The New Employee Profile - from here

Lastly, Workday also showed some improvements to smartphone and tablet user interfaces - not as extensive and advanced as the browser interface, but continued progress. Not surprisingly, the user experiences between browser and mobile / tablet differ - and Workday said that in 2014 this will be something to harmonize. That harmonization will certainly be important for the upcoming recruiting functionality, which was designed along the mobile first credo - but users will certainly use both the browser and their mobile device. And Workday has already opened more of the same challenge with Update 21, enabling managers to hop back and forth between browser and mobile device during the performance management process. The more seamless and smooth Workday can make the user transition between the platforms, the more of a win for its users.

        And some more functionality

Next to the already mentioned device flexibility for operating a performance review, Workday also added reports that allow companies to deal with compliance and legislation - reports for the ACA in the US and the addition of the fairly complex French Registre Unique du Personnel (RUP) - these are key additions. Workday needs to keep in mind though that more than reporting will be needed for ACA compliance, and that remains a challenge for the industry given the dynamic changes around the implementation of the law.

HCM Functional Richness continues to trend down

As is by now tradition, here is the quick and dirty assessment of the HCM related functional richness of the release - which overall continues to trend down. On the one side understandable, given the need for a user interface refresh - on the other side Workday is still missing key HCM building blocks with recruitment (coming in Update 22), training (addressed through various partnerships), payroll (no additional countries in 2014) and a partnership-only strategy for more complex workforce management automation needs.

Update 15 (October 2011)
Update 16 (April 2012)
Update 17 (August 2012)
Update 18 (November 2012)
Update 19 (April 2013)
Update 20 (September 2013)
Update 21 (January 2014)
        User Experience
        - Outlook Integration
        - Chatter Integration
        HTML5 Support for non IOS devices
        - New modules
        - Global Support
        Workforce Engagement:
        Team Profile
        Professional Profile
        Headcount Planning
        Big Data Analytics
        New User Interface for browser
        Talent Management
        - Talent Reviews
        - Career Interests
        - Cornerstone Integration
        Time Tracking
        Performance Management Enhancements
        Android Native Support & iOS Mobile Enhancements
        User Experience - Configurable Grids for Compensation
        Performance Management across browser and mobile
        - Payroll for Canada
        - Payroll Connector

        Usability Enhancements
        Custom Fields
        More custom fields
- Notebooks for iPad
        Legislative support - Report for ACA / RUP
        Higher Education Functionality

        170 Enhancements
        207 Features / 80 Brainstorm Items
        246 Features / 67 Brainstorm Items
        Major HCM building blocks are highlighted in yellow.

As usual we cannot look into what is in the various features of the release - their number is trending upwards, and that is a good sign. But the brainstorm items are slightly down - and again, no insight into how functionally rich they are.

        And no mention of...

The highlight of (the last) Update 20 was BigData Analytics - we shared our views here - and though we understand that one hour is not a lot of time, we'd have expected an update and further build out of that functionality / content. Along the same lines, the new notebook functionality for the iPad got a lot of airtime in Update 20 - no mention in the iPad demo this time. And the other Update 20 highlight was an editable grid control - always a popular feature - no mention in the briefing; we hope it made it into Update 21.

Again, one hour is not a lot of time - but showing consistency (e.g. on iPad functionality and the new grid control) or progression on new platform capabilities (BigData Analytics) would be a welcome content direction for future updates.

        The customer perspective

In general SaaS customers are quickly learning that the advantages of rapid functionality delivery also come with the downside of a continuous testing and validation workload. Workday has addressed this challenge with a slowed down release cycle - but even in a slimmer release from an HCM business automation standpoint like Update 21, the testing and validation effort for customers is going to be significant. Post Update 21 there is additionally the change of user interface involved - which, no matter how consumer grade the new user interface is, will create some bumps on the road to adoption and usage for Workday customers. I would expect challenges with occasional self service users - and the medium / high frequency users moving quickly to the new user interface.

It was good to hear that Workday is working with customers and listening to their findings and needs - it will be very interesting to hear from the trenches and some real life experiences in the coming weeks. Let's hope for the industry, Workday customers and Workday itself that this will be a smooth transition, as UI improvements are key for the overall success of the SaaS market.


Workday deserves high marks for doing the necessary housekeeping and innovation - in this release the focus was on the user experience. But this ties down significant resources, coupled with the large investments to get Financials fit for BigFin and to get the first release of Recruiting out with Update 22 - the consequence being less innovation on the HCM business functionality side.

At the same time the HCM automation needs and desires of customers are rising and the competition is not standing still. HCM customers want more and more automation from a vendor, delivered as an integrated system that addresses HR Core, Payroll, Talent Management - and to varying depths Workforce Management. Globalization forces are hitting enterprises fast these days, and global support not only for localization but for local compliance, including payroll, quickly becomes table stakes. Workday will have to chart its plan to functional completion and more global compliance coverage soon - better sooner than later.

        My posts on the previous Workday Updates can be found here for Update 20 and here for Update 19.


Earlier this week IBM kicked off its yearly Connect conference in Orlando. Think of Connect as the conference for all things formerly Lotus, collaboration, customer and employee experience - and everything around the Kenexa products. With over 5000 participants the conference was well attended; in a Q&A session IBM shared that over 35% of attendees were first time attendees. That's an encouraging sign of renewed interest in the products that are part of Connect.

        Opening Keynote Takeaways

Anybody sleepy for the (early) 8 AM start was certainly woken up by American Authors. Jay Baer led through the keynote with his usual flair, and Seth Meyers told us how social media has changed recruiting in the comedy space. It used to be - have you seen the comedian? - today it's - what is your Twitter account?

        Notice Hayman's shoes - un-IBM-esque
Thirty minutes in, it was up to Craig Hayman to bring in the overarching message - not only is there a re-branding of product names around Connections, but the overarching theme is rhythm - something Hayman made plausible, as it's all about orchestrating employees in the right rhythm - though it got somewhat lost later in the keynote and other sessions.

Check out my colleague Alan Lepofsky's blog - he will have something on the re-branding to Connections and the overall social and collaboration takeaways up there soon.

From the cloud perspective, the interesting part is that IBM announced it will move Domino application capabilities over to a PaaS powered by - no surprise here - SoftLayer. A good move for IBM to bring more load to its SoftLayer data centers and help customers renovate their Domino applications.

Demo Showcase - well done - but still a dizzying amount of product

IBM faces the challenge of how to tie together the diverse product set shown at Connect in a compelling keynote demo. And IBM did a good job - picking a bank as the showcase, with IBM employees playing roles that dealt with customer experience, talent management and mail / collaboration. The product scope is dazzling - and while IBM did a great job of showing each product name in the middle of a giant screen as it was demoed, it was hard to keep an overview.

        IBM Social Learning as part of the demo with Abby Euler and Tim Geisert 
Jeff Schick then walked through the new product announcements in detail - but it would have been good to know during the demo what was really new.

Customer testimonials by Pepsi, Performance Bicycle, Sika and Petrobras were powerful and spanned the range of products well.

        Kenexa Talent Suite

Not surprisingly, Kenexa is bringing together its various products in a talent suite, complemented by the new Connections collaboration suite. In detail, the Kenexa Talent Suite comprises:

        • Talent Acquisition - which brings together recruiting, assessment and onboarding. 
        • Talent Optimization - which brings together performance management, succession and compensation planning
        • Social Networking - basically the capabilities of the (newly branded) Connections products
        Additionally IBM has a BigData angle on talent management and - again not surprisingly - Watson is there to help - with the recent announcement of Watson Foundations. 

        Kenexa keeps growing

IBM acquired Kenexa about 14 months ago - and it's interesting to chart the strategy IBM is taking. Here are some of the pointers we learned about in Florida:
        • Suite building  - Not surprisingly IBM is bringing together the Kenexa products in a suite. Good for customers  - but we will have to see how well integrated this new suite will be.
        • New functionality - IBM announced the new Social Learning product - which basically brings social capabilities to the learning process - both on discovery, sharing, usage and creation of learning content. A proof point of the strategy to enrich Kenexa capabilities with Connections capabilities.
• Complement with Connections - As HCM systems are all about people, and people need to cooperate, this can be a leg up for Kenexa - tight integration with a collaboration suite. The area to watch is what IBM will do with customers not using IBM collaboration products.
        • Complement with Watson - Watson's analytical capabilities are interesting for many of the decisions that can be automated in a modern HCM system.
        • Move to SoftLayer - This is IBM's cloud strategy and it's being executed for parts of the Kenexa products in 2014.
So as Kenexa goes into its second year under the IBM umbrella, it keeps growing in all possible directions - adding more core capabilities, bringing products together in a suite, and enabling and complementing products with the capabilities of other IBM technology products.

        Product vs Services

One of our concerns around enabling more decision management and analytical processes is that these are usually services driven businesses. And IBM certainly has enough smart consultants and data scientists to deliver amazing value to customers in these engagements. But the holy grail for analytical software is to enable these decisions in software, without a professional services intervention. This is not easily done - neither for IBM nor its competitors - but IBM's strong professional services DNA may not help the productization process.


Overall, good progress on all fronts by IBM for all products that are part of IBM Connect. Bundling the Kenexa capabilities into a Talent Suite is a good move - but to a certain point an overdue one. Complementing Kenexa with other IBM capabilities (e.g. Watson) makes sense - but these need to be productized, not services offerings, in order for IBM to protect license revenues against the usual competitors.

If IBM can pull off the productization process, Kenexa will have a strong DNA of differentiators vs. the competition - always helpful when selling enterprise software. 2014 will be a key year for IBM to make this productization happen: the out of the box delivery of analytical, BigData, collaborative and Watson capabilities as a productized offering complementing Kenexa talent processes.

        You can find a Storify collection of the keynote here


Oracle held its first HCM World conference from February 4th to 6th in Las Vegas, and it rolled out the big guns - both CEO Larry Ellison and President Mark Hurd presented, something you usually do not see at an Oracle conference except for Oracle OpenWorld. It was good to see both executives so well versed in people processes and the HCM pitch.

Nobody questions Oracle on technology, so whoever had the idea to de-emphasize the technology aspect for the sake of thought leadership deserves big kudos on the Oracle side. Apart from a demo between Mark Hurd and Gretchen Alarcon, software was relegated to the many track sessions. And aside from Larry Ellison's presentation and a few glimpses of Chris Leone's, the talk was never about the technology but always about the challenges and struggles enterprises and Chief People Officers face every day. And the good news was: Oracle led with the business problem, not the technology - a radical departure from a past in which Oracle traditionally wanted to convince customers mainly on the technical merits of its products. That doesn't work for HCM audiences, so it was a good move to adjust.

        Oracle HCM Cloud Progress

We were surprised by the momentum of Oracle HCM Cloud back at OpenWorld - and can now report continued momentum in customer uptake. Oracle is keeping go-live progression roughly proportional to the calendar, which is a pretty encouraging sign. Customer go-lives are happening all over the globe, speaking for a good sales effort beyond North America, which can be a challenge for US based vendors.

More importantly, the roadmap is not fizzling out in terms of functional richness, and the upcoming release R8 (sorry, all details were under NDA) includes new functionality in areas that are hard for all vendors to create and maintain. It was equally good to see that Oracle is tackling (and needs to keep tackling) the integration issues - both horizontally with other applications and internally within Oracle HCM Cloud, between the Oracle Fusion HCM side and the Taleo side. This integration remains one of the weaknesses of the Oracle HCM portfolio - but Oracle is confident that it can convince customers to accept the integration issues thanks to significant functional leadership on the recruiting side. And while that functional leadership is certainly given for now, it will be an area to keep a watchful eye on in the future.

Oracle also had the bravado to let the analysts play with the new iPad app - and can now successfully claim that even analysts can use their HCM system. Few other vendors have given the analyst community that opportunity, and this certainly raises the ante from a competition perspective. From the 90 minutes using the iPad app we can say that it's highly functional, easy to use and built in such a way that it is highly likely to make an everyday user (like a manager) happy.

        Next steps for Oracle HCM cloud

As mentioned, most information about the roadmap was under NDA - but Oracle lifted the kimono in a few areas that can be shared:

• Workforce Optimization - Oracle plans to get into the workforce management arena, starting with the support of mini and macro re-organizations. Oracle HCM Cloud will allow customers to analyze and predict worker and / or organizational challenges and opportunities, model and plan actions derived from these findings, and then execute on them at a certain point in time.
        • New and expanded Mobile HCM - Oracle plans to deliver a new mobile experience both for tablets and smart phones. Interestingly Oracle also offers some offline capabilities - something of immense value for real mobile work warriors. 
• BI Benchmarks - Oracle plans to provide more insight around the recruiting process - leveraging its strong market position with Taleo, which enables 11% of all US hires, adds 1.2 million job openings per quarter and features profiles of 417 million job candidates.
• Work Life Apps - Oracle gave a Fitbit to HCM World attendees and organized a step competition among them - a great showcase of the soon to be released Work Life Apps the company is planning.
        Screenshot of the new HCM Cloud User Experience

        Social matters

Oracle - starting with Larry Ellison - did not tire of stressing how important the social aspects of HCM are and how well Oracle HCM Cloud can support them, given that Oracle Social Network (OSN) is baked into the platform that Oracle HCM Cloud is built upon.

The scenarios Oracle stressed were, not surprisingly, around:
        • Collaboration - Enable users to collaborate and find expertise in the context of their business process. 
        • Discovery - Enrich employee profiles and job descriptions with reputation and influence in social media communities. 
        • Referrals - Syndicate and post open job requisitions and source candidates through social networks.
        • Consolidate - Bring together all social media activities for HCM. 
Oracle is also shipping a number of social HCM applications, of which two are publicly known. The first is My Reputation, a capability that analyzes social activity, allows for integration with 3rd party social networks, and rewards users for participation and brand building activities. The second is more enablement around social sourcing, allowing for automated employee referrals, job posts to Twitter and Facebook, and capabilities to build talent communities.

These are all compelling scenarios for HCM professionals. The main challenge Oracle has with the integration of OSN is the harsh user interface change between the Oracle HCM Cloud user interface and the OSN interface in the iPad product. This may be temporary - but it had better be addressed by Oracle, sooner rather than later.

        BigData  Plans

Oracle sees two aspects of BigData in HCM: on the one side the enrichment of all HCM data with BigData, e.g. the capture and availability of social streams and feeds - on the other side the capability to benchmark against the huge data set available across the Taleo customer base. In both cases Oracle has the technological capability to pull it off, with the second scenario having a more immediate impact on the recruiting function.

        Integration matters

As mentioned before, the Fusion DNA and the Taleo DNA live on different technology platforms, and Oracle has chosen to integrate Taleo - not to re-write recruiting functionality from scratch on the Fusion Applications technology stack. On the positive side it means that Oracle has very good - some say market leading - recruiting functionality at hand; on the negative side it means there is an integration effort that needs to be addressed in every customer implementation that spans Oracle HCM Cloud.

The good news is that the company is looking at providing better integration, starting with a single master data set for work structures, employee master and a unified profile. Oracle also pledged a consistent user experience across mobile and desktop, and common reporting and analytics - making the different back end systems a non issue for end users. The most important ambition though is the convergence to a single technology platform - including a common programming model, a common extensibility framework and a single security and role framework.

Oracle even went so far as to release a roadmap - unfortunately under NDA - but rest assured the analyst community will monitor progress, and we would applaud publicizing it sooner rather than later. Oracle should be encouraged to do so, as the public plans for integration between Fusion R8 and Taleo 13B got a very good review with clients, prospects and the partner ecosystem.

PeopleSoft - Alive & Kicking

The PeopleSoft update by Paco Aubrejuan was again very valuable; it remains intriguing how powerful and strong the product and brand are. Customers keep moving and upgrading, all the way to the latest release, 9.2. And if Oracle keeps supporting customers well, the end is not in sight (yet). On the contrary, Oracle is investing more into PeopleSoft, even with the latest technology - see the press release around new in-memory applications for PeopleSoft HCM. It will be tough to find another software asset acquired approximately 10 years ago that gets an in-memory capability added in 2014. Ultimately a showcase that Oracle keeps investing in acquired products and creates value for customers using these products - the whole Apps Unlimited program (our take here).


        All in all a very promising first event for the expected HCMWorld conference series. Oracle talking business best practices is a welcome change, the investment and momentum around Oracle HCM Cloud continue with traction, and the roadmap is rich in new functionality. Even though Oracle has come a long way on user experience, it now needs to tackle cross product suite usability concerns (between Oracle HCM Cloud and Oracle Social Network) and share and deliver on its ambitious integration and convergence roadmap for the underlying technologies behind the different pieces of Oracle HCM Cloud.  


        In its earnings press release of February 26th, Workday disclosed the acquisition of Identified, a San Francisco based start-up that had specialized in recruitment success using analytical algorithms.

        A recap of Identified

        The company has tremendous talent and started by condensing available social network information into tangible (and, if you will, actionable) candidate profiles. With that Identified reverses the classic recruiting process - which expects candidates to come to the recruiting company - into a mini headhunting process, where the recruiter can start actively looking for candidates.

        Needless to say the collection, identification and creation of a candidate profile is no trivial task, and Identified spent significant time on this, creating their patent-pending SYMAN process. The recent demos of the product then showed the search over these profiles. Back in January we missed some analytical tools beyond the candidate profile that would help the recruiter focus on the right candidates without going through the overall result list, e.g. a scoring model to serve the best candidates to the recruiter. The good news was - this was on Identified's roadmap.

        Identified Screenshot from website.

        But then Identified seems to have run out of steam on the business side. It probably needed more capital to build engagement and interaction functionality for recruiters and more ATS functionality, and also had to cater to the need for mobile support for recruiters. The company had taken $22.5M in funding; a financial analyst disclosed the acquisition was $15M - not a good exit unfortunately. 

        Why Identified for Workday?

        Workday is currently building its recruiting solution, expected in the first half of 2014, but its first release will miss sourcing functionality. My expectation is that Workday will replay the playbook we have seen with 3rd party compensation data: use its new BigData Analytics product capabilities to look at social network data and / or the classic sourcing providers and put the data onto the AWS based storage for BigData Analytics. Then use the SYMAN algorithms to collect and condense it into candidate profiles and serve these to the users of its new recruiting product.

        This has a number of pretty intriguing consequences:

        • Innovative Recruiting - Workday will be able to serve the best external candidates into a recruiting scenario. Ideally not even to the recruiter, but to the manager opening the requisition.
        • Real Analytics at work - The condensation and serving of the best candidates requires some real analytics (the kind that takes an action or suggests something). Identified can provide them - if they can find the way to serve the best candidates. (More on real analytics here).
        • Workday BigData Analytics at work - Many observers, myself included, have given Workday a hard time over the current BigData Analytics offering being neither of those. If Workday does what Identified enables it to do - this would really be BigData (structured and unstructured data with the 3Vs - Identified demos claimed to run on 1B+ candidates) and Analytics (condensation, scoring).
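        The condense-and-score idea can be illustrated with a small sketch. This is purely hypothetical code - the function names, fields and scoring logic are my own illustrative assumptions, not Identified's SYMAN algorithm or any Workday product API:

```python
# Hypothetical sketch: condense social network mentions into a candidate
# profile, then score the profile against a requisition's skill list.
# All names, fields and the scoring logic are illustrative assumptions.

def condense_profile(raw_posts):
    """Condense raw social posts into a skill-frequency profile."""
    profile = {}
    for post in raw_posts:
        for skill in post.get("skills", []):
            profile[skill] = profile.get(skill, 0) + 1
    return profile

def score_candidate(profile, requisition_skills):
    """Score a condensed profile against a requisition (0.0 to 1.0)."""
    if not requisition_skills:
        return 0.0
    hits = sum(1 for skill in requisition_skills if skill in profile)
    return hits / len(requisition_skills)

posts = [{"skills": ["java", "sql"]}, {"skills": ["java", "cloud"]}]
profile = condense_profile(posts)
print(score_candidate(profile, ["java", "cloud", "hana"]))  # 2 of 3 skills match
```

        The real value, of course, lies in doing this at the claimed 1B+ candidate scale and on messy, unstructured data - which is where the BigData Analytics infrastructure would come in.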


        If all that happens (and it seems likely to me) Workday will have a very good value proposition for its recruiting product and deserves kudos for bringing some very innovative business processes into recruiting, differentiating it from the rest of the vendors. I have expressed my concern that Workday wasn't showing enough thought leadership in its position as an HCM SaaS leader in the recruiting area, but this may well change now. The next months and briefings will tell. 

        Good for Workday customers. Good for Workday talent - an acqui-hire. Not so good for Identified investors.


        Over 11,000 attendees convened at the MGM Grand in Las Vegas for the annual IBM Pulse event. The event focused heavily on cloud and mobile and served as the backdrop to unveil BlueMix, Big Blue’s PaaS platform for years to come. Adding to the seriousness of the announcement, IBM made a $1B commitment to BlueMix. Side note: We have learned from recent announcements around SoftLayer data centers and Watson that when IBM is serious about something, it makes a one billion dollar investment.

        Conversations with attendees indicate a very energized and excited crowd. Concurrently, IBM organized a developer conference, trendily called Dev@Pulse.

        BlueMix - lots of Blue on a Greenish platform

        BlueMix is IBM's PaaS product for the years to come. The project started over two years ago, was shepherded by veteran Danny Sabbah and must have gone through some pretty substantial iterations. Remember, two years ago IBM had not yet invested into Pivotal - so there must have been a (true) blue ambition at some point. The future will tell what led to the investment in April 2013 (more here). The shift was a change for the better since it committed IBM to a community of multiple vendors, which this week led to a new oversight committee for Pivotal. 

        IBM Infographic on the 1B investment.

        So under the hood BlueMix runs on Pivotal's Cloud Foundry (hence the green - or what color is Pivotal's branding?) - but IBM enriches it in many ways:

        • Attractive user interface - IBM has done a good job creating an attractive user interface to run and tie applications on BlueMix. Of course you can go back to command line, but interestingly IBM shared that developers want to be more productive, too - so we will see how much mileage the overlaying user interface will get. 
        • 'Any' programming language - IBM devised BlueMix to be open and for developers to bring their code and 'just' deploy it. That's a great design ambition but of course hard to do in the real world - so today IBM can do Java, JavaScript, PHP, Perl and Ruby (not all supported by its own IDEs though). 
        • Patterns prominently loom - IBM's pattern technology, originally devised with WebSphere, is getting good usage as part of BlueMix and is probably key to making the magic in the background work. Environments get more complex by the day - so the automation tool and its capabilities are key for BlueMix's success. 
        • A rich platform - IBM keeps exposing services and tools to BlueMix - most prominent right now are the Xtify push services. MongoDB was there and of course Cloudant (freshly announced acquisition) now, too. The extensibility here is key for IBM to augment BlueMix in the coming quarters, Watson services will be the clearly differentiating crown jewels. 
        • Of course, SoftLayer - Almost needless to say IBM will run BlueMix on SoftLayer technology. Probably a good move to cater to a security sensitive audience that is re-assured by SoftLayer's bare metal capabilities, extended localization and transparency down to the machine level.
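        To make the 'bring your code' point concrete: a Cloud Foundry-based platform like BlueMix typically hands a deployed app its listening port via the PORT environment variable and routes traffic to it. The following minimal Python sketch shows the shape of such an app - buildpack names, routes and service bindings are left out, and the convention assumed here is Cloud Foundry's, not an official BlueMix sample:

```python
# Minimal 'bring your code' web app sketch for a Cloud Foundry-style
# platform. The platform is assumed to inject the listening port via
# the PORT environment variable (a Cloud Foundry convention).
import os
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Simplest possible WSGI handler: reply to every request.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the platform\n"]

def main():
    port = int(os.environ.get("PORT", 8080))  # platform-assigned port
    make_server("0.0.0.0", port, app).serve_forever()
```

        Pushing such an app would then be a single CLI command against the platform endpoint, with the pattern and automation machinery doing the provisioning work in the background.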
        Crosby with 'born on the web' SoftLayer customers

        SoftLayer is the go-forward answer for all things cloudy

        Not surprisingly IBM keeps strengthening its SoftLayer commitment. Reading between the lines it is clear that the SoftLayer x86 legacy cannot support the Power based Watson plans - so not surprisingly IBM is bringing SoftLayer to run Power based systems. There was some confusion about what's being brought to whom - SoftLayer to Power or vice versa - but that is all good for a company like IBM finding and charting its course to the cloud.

        Likewise IBM will invest more into the DevOps visibility and capability of SoftLayer resources. SoftLayer had more of a run-time DNA in the past for its clients, so bringing the additional flexibility to run more and better development cycles with SoftLayer only augments the platform.

        LeBlanc introduces BlueMix

        BlueMix Moves Beyond the Big Blue Legacy

        A few takeaways show that this is not our father's IBM: 

        • CloudFoundry - IBM used to build it all itself - now it shares a foundation with co-opetitors like HP, SAP, VMware, EMC etc. - an interesting aspect that we also see, e.g., in the open cloud arena with OpenStack.
        • Standards based - IBM has always been a promoter of standards, but was also large enough to create standards by its sheer size. Taking outside standards and building out on them is the strategy for sure now.
        • Developer focused - IBM was always good at building software with the user in mind. But the developer focus is certainly new and it was impressive to see to what lengths IBM has gone to understand the (shifting) needs of its developer base.
        • Openness - Again IBM has always been a supporter of open. But it was often more on paper and marketing than in every day IT life. It's good to see the openness moving way beyond the marketing messaging and into the product DNA.
        • Cloud First - This is probably the first product that IBM provides for developers on the cloud first and - for now - only. Good proof of IBM acknowledging the shift in the approach to tackle future revenue streams. 

        Coding on stage - pair programming with Lawson and LeBlanc

        The bigger picture - 21st century enterprise applications

        At the end of the day the question is - what will 21st century enterprise applications look like, how are they built and on what platforms do they run? We know the traditional ERP suites of the late 20th century are not the answer for a digital economy. We also know that building and managing software is getting more and more complex, and more automation to handle this complexity coupled with less lock-in is a promising direction for the future of building applications. Which leaves us with the only thing we know - that the 21st century apps will run on virtualized environments in the hybrid multi-cloud.

        As to the applications themselves - it's clear that IBM is gambling on the API economy promise - to be able to bind together the APIs of IBM SaaS properties with other, non IBM APIs on a powerful platform. We know that this platform is BlueMix, which makes it a very strategic asset for the future of IBM, its customers and its partners.

        Implications, Implications...

        So what does it mean for ...
        • IBM customers & prospects - You should actively look at BlueMix if you have an immediate application development need. If you are a Rational shop, JazzHub is a good way to get started, but you probably have many questions for IBM. Otherwise wait and see how IBM builds out BlueMix over the next quarters. If you are a customer of the IBM SaaS portfolio, pay attention to the roadmap and functional richness of future releases.

          Prospects should compare BlueMix with the usual suspects out there - Amazon's AWS, Microsoft's Azure, Google GCE come to mind (watch for the end of March events).

          Both customers and prospects should be careful not to overcommit until pricing and licensing are clear - which IBM has not published yet. 
        • IBM partners - IBM has put its cards down for both its PaaS and its 21st century Apps strategy - if you want to be part of it, it's time to start evaluating BlueMix and chart a strategy of differentiation in the partner ecosystem. And look at the patterns - these could be a key to more efficient engagements and potentially even the birth of an IP strategy for you.

          Prospective partners who want to get a slice of the IBM ecosystem now have a chance to jump in the mix - it's pretty much year 0 for the API economy. A good time to start.
        • IBM - IBM needs to keep adding and building more services and APIs into BlueMix. Early references and success stories will be key to show impact and get the large - and mostly conservative - IBM install base to move to BlueMix faster. Obviously, get the pricing right - easier said than done. Look into exposing more of the products that were featured at IBM Connect as a differentiator to other cloud PaaS offerings out there. Publish a roadmap both for the SoftLayer expansion to Power as well as the addition of APIs from the SaaS portfolio to BlueMix.
        • IBM competitors - Take a look at the broader perspective of what IBM is trying to do. If you compete with IBM in PaaS - you need an API strategy. If you compete on IaaS - you already know what the bare metal threat is for your business, so position and strategize accordingly. If you compete on SaaS - decide what the future of your 21st century apps architecture is. 


        For the longest time the ultimate application strategy for IBM has not been clear. Acquisitions seemed opportunistic and all over the map - at least from the outside. But maybe it was always the plan to move to the API economy - ultimately. Only Steve Mills will know. And interestingly Mills spoke about being able to do things better and at a greater scale than ever before (see interview here). 

        BlueMix is the key asset as a platform to get IBM to the API economy, from a PaaS perspective and SoftLayer takes care of the infrastructure.

        In spring 2014 IBM's future looks remarkably better than 12 months ago.

        Here are some Storify Tweet Collections of the keynotes of Day 1, Day 2 and Day 3.


        In a virtual press conference, SAP’s Vishal Sikka, in conversation with Jonathan Becher, went through two press releases that the company had just published.

        Let’s look first at the key statements in the cloud press release:

        SAP announced new offerings for SAP HANA Cloud Platform. Customers now have the ability to choose from three offerings: SAP HANA AppServices, SAP HANA DBServices, and SAP HANA Infrastructure Services. 

        MyPOV – Good for SAP to make products / services more consumable – the separation of Apps, DB and Infrastructure services makes sense from a SAP perspective, given HANA’s prominence in marketing and product plans, but may confuse buyers who are used either to a complete shielding of physical details or to more granular detail (e.g. Amazon’s AWS, Google's GCP).

        SAP HANA Offers New Pricing and Consumption Model

        New pricing options for SAP HANA intend to broaden the reach of the SAP HANA platform and make it easily accessible to everyone starting with a base price with add-on options available as desired. Customers buy through a consumption model and can either implement end-to-end platform use cases or choose additional options as needed such as predictive analytics, spatial processing and planning. This significantly increases the opportunities to get started with SAP HANA and affords customers the ultimate ability to innovate. 

        MyPOV – More granular pricing is usually a good strategy, so kudos to SAP. But SAP needs to pay attention to not get too complex and get the pricing right. Too early to tell if the first release has achieved that.

        SAP HANA Cloud Platform Delivers Choice With New Offerings

        To address the growing apps economy, SAP announced new and enhanced offerings for SAP HANA Cloud Platform. Startups, ISVs and customers can now build new data-driven applications in the cloud. This platform-as-a-service (PaaS) offers in memory-centric infrastructure, database and application services to build, extend, deploy and run applications. SAP HANA Cloud Platform is available today via SAP HANA Marketplace. Customers can gain access to SAP HANA in as little as 30 minutes and immediately benefit from a unified platform service for next-gen apps, and can easily buy, deploy and run with the flexibility of a subscription contract. 

        MyPOV – Unfortunately the press conference was light on details of SAP’s PaaS strategy. Earlier in the week my hope was that SAP might unveil something in conjunction with Pivotal’s CloudFoundry, similar to what IBM has done with CloudFoundry and BlueMix. And many of the apps services – around analytics, mobile etc. – are similar to the ones offered on BlueMix. But overall I remain undecided on SAP’s PaaS strategy until we hear more – I am still digesting the River / RAD announcement and plans. Which may be the reason SAP will come back with more PaaS details later in the year.

        A very good representation of services by Matthias Steiner from here

        SAP HANA Marketplace Simplifies Purchasing Experience

        The new SAP HANA Cloud Platform offerings are available for a simplified trial and purchase experience in sizes ranging from 128 GB to 1 TB of memory on SAP HANA Marketplace, an online store that lets customers learn, try, and buy applications powered by SAP HANA. SAP Fiori™ apps and hundreds of startups and ISVs offerings are also available on the site. 

        MyPOV – The improvements and new UI of the HANA Marketplace are probably some of the key advancements we have seen today. It looks simple and easy to procure apps and services, though I have not purchased any. Fiori looms prominently across all offerings, but I am not sure how Fiori will run in combination with the marketplace, given its dependency on the Suite being available somewhere. Finally the monthly prices do not foretell a good story for the elasticity of the offering – one of my concerns around HANA since the early days.

        SAP HANA Innovation Breakthrough With SAP Genomic Analyzer

        The SAP HANA platform continues to change the world of compute and what is possible. SAP is paving the way for real-time personalized medicine with SAP Genomic Analyzer, a new application powered by SAP HANA that aims to allow researchers and clinicians to find breakthrough insights from genomics data in real time. Currently in the early adoption phase, key planned benefits include faster data processing through various stages of genomics pipeline — alignment, annotation and analysis — and immediate analysis of data in minutes rather than days. Researchers are envisioned to be able to analyze genetic variants of large-scale cohorts to find patterns of variation within and between populations. With better identification of clinically actionable genetic variants and real-time visibility into “in-the moment” situations, clinicians shall be able to understand and personalize care to patients with diseases such as Type II diabetes. 

        MyPOV – Looks like SAP is on to something with genomic processing. When the product was first announced, my unrepresentative sample of scientists dealing with genetics shrugged their shoulders when polled. Fast forward 6-8 months and they are asking me about HANA - a good turnaround for SAP.

        In a separate press release SAP said that it has broken (together with partners) the world record for the largest data warehouse:

        This new world record demonstrates the ability of SAP HANA and SAP IQ to efficiently handle extreme-scale enterprise data warehouse and Big Data analytics. SAP and its partners had previously set a world record for loading and indexing Big Data at 34.3 Terabytes per hour.
        A team of engineers from SAP, BMMsoft, HP, Intel, NetApp, and Red Hat, built the data warehouse using SAP HANA and SAP IQ 16, with BMMsoft Federated EDMT running on HP DL580 servers using Intel® Xeon® E7-4870 processors under Red Hat Enterprise Linux 6 and NetApp FAS6290 and E5460 storage. The development and testing of the 12.1PB data warehouse was conducted by the team at the SAP/Intel Petascale lab in Santa Clara, Calif., and audited by InfoSizing, an independent Transaction Processing Council certified auditor.
        MyPOV – This announcement is key, as HANA critics have been stating that in memory is too expensive for running large scale data warehouses. We now have proof of scale – but barring details on what was done, the business benefit and the cost of running a 12.1PB data warehouse are screaming for SAP to provide more details.


        SAP delivered some key advances around moving its offerings to the cloud, in an entertaining and informative format – I hope to see the Sikka / Becher combo again soon, as it worked very well. 

        But then I would have preferred for SAP to provide a more practical slicing of services for HANA: price the cost of storing data in memory, of maybe moving it out to another medium, of processing it, and any networking costs to get it there. These are the intuitive options for cloud database pricing. It’s fair and fine to bundle – but these are the parameters of the costs incurred. Sooner or later SAP will hopefully come closer to these categories – or better ones. 
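        To make those pricing parameters tangible, here is a back-of-the-envelope sketch of what such a cost model could look like. All rates are made-up placeholders for illustration – they are not SAP's (unpublished) prices:

```python
# Hypothetical cloud database cost model along the dimensions suggested
# above: in-memory storage, cheaper secondary storage, processing and
# network transfer. All rates are invented placeholders, not SAP pricing.
RATE_MEMORY_GB_MONTH = 50.00  # $/GB-month held in memory (assumed)
RATE_COLD_GB_MONTH = 2.00     # $/GB-month on another medium (assumed)
RATE_COMPUTE_HOUR = 3.00      # $/processing hour (assumed)
RATE_EGRESS_GB = 0.10         # $/GB moved over the network (assumed)

def monthly_cost(mem_gb, cold_gb, compute_hours, egress_gb):
    """Sum the four intuitive cost components for one month."""
    return (mem_gb * RATE_MEMORY_GB_MONTH
            + cold_gb * RATE_COLD_GB_MONTH
            + compute_hours * RATE_COMPUTE_HOUR
            + egress_gb * RATE_EGRESS_GB)

# e.g. a 128 GB in-memory instance with 1 TB of cold data,
# 200 processing hours and 500 GB of network egress:
print(monthly_cost(128, 1024, 200, 500))
```

        Whether SAP bundles these components or prices them separately, exposing them would let customers reason about what they actually consume – and about the elasticity question raised earlier.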

        In the meantime SAP deserves kudos for moving into the right direction.

        Lastly I still see SAP as an enterprise software automation company (despite my own findings) – and not a technology company in the first place – so talking about hardware specs and performance is still something I need to get used to. But then SAP should be the one vendor talking about the business impact and benefits of technology with flair. At least once every 10 minutes of a press conference.

        I compiled a Storify tweet collection, too - you can find it here. Have a look - the Twitter convo with the pundits is worth it. 


        We had the chance to learn from Oracle executives about the company’s progress in all things cloud at a recent event in Palm Springs. It was good to see that Oracle communicated openly and with candor where the company stands in regards to getting ready for and taking advantage of the cloud era.

        First of all, there can be no question after this day that Oracle is and remains committed to the cloud. Thomas Kurian, who kicked off the day of briefings, was crystal clear on that. All irritations of the past are history and there can be no question that the product development executives we spoke to are fully committed to cloud. Maybe even a little too much – in my view too many products got the as-a-Service suffix – not sure how many ‘aaS’ products customers will really want to deal with.

        The state of SaaS

        Chris Leone (@chrismleone) presented the Oracle SaaS status and he was a good choice – as Constellation sees Oracle HCM Cloud having the most traction, followed by CRM Cloud and Finance Cloud. None of the colleagues questioned Oracle’s push in SaaS. Likewise it was good to see Thomas Kurian speak about the Supply Chain and Manufacturing offering earlier in the day - for whatever reason(s) Oracle is traditionally late to bring manufacturing automation to new platforms, but it’s good to see the traction starting in this area, which completes enterprise automation for most enterprises.

        The DNA of Oracle’s SaaS products remains compelling – with its foundations on top of Oracle Social Network and a pretty good mobile architecture. User interfaces are now compelling to use – and it’s good to see executives – from Kurian down – demo the software themselves. And Oracle did a good job showing some thought leadership in a CRM demo – not only showing social interactions, but coining the term ‘digital body language’ (coming from Eloqua) and using the predictive analytics foundation to foretell future sales success.

        From Kurian's presentation.

        New – Information as a Service

        Kurian presented the Oracle cloud as having one more offering than the usual trifecta of SaaS, PaaS and IaaS – Information as a Service. Later he presented one slide on the topic – unfortunately under NDA – but we can all conjecture what this offering is all about.

        The state of PaaS

        Oracle brings a very rich product portfolio to its PaaS offering, probably stronger than its IaaS offering (for now). Not surprisingly – and similar to IBM – PaaS caters to developers, LOB execs and IT managers.

        From Kurian's presentation.

        The most attractive offering for developers is certainly the Java-as-a-Service offering, with an integrated IDE and access to a powerful database. On the LOB executive side the social platform with collaboration and the integration / BPM capabilities are attractive. And on the IT side it’s certainly the cloud to cloud iPaaS and APM capabilities (based on Enterprise Manager) that are key offerings.

        Given Oracle’s large install base for its database we expect a lot of interest from existing customers in the Database-as-a-Service (DBaaS) offering. Having an 11gR2 or 12c database readily available, with the option of having it fully managed by Oracle, is certainly an attractive value proposition for Oracle DBMS customers.

        The DBaaS offering is also one of the best examples right now of how the integrated Oracle technology stack (aka redstack) can work together – to enable RESTful HTTP services Oracle ‘just’ bundles in a (lightweight) Java server. Constellation expects similar under the hood bundling of the redstack to emerge sooner rather than later.

        Moreover, Oracle 12c's pluggable database architecture makes it easy to move complete databases between on premise, the cloud and, if you want, clouds.

        The State of IaaS

        Here Oracle offers Storage, Compute, Identity, (lightweight) queues and notifications. Of all three traditional cloud product areas – SaaS, PaaS and IaaS – this is the one where Oracle needs to catch up and move faster. In all these areas it seems Kurian has prescribed the two step program mentioned in the headline of this blog post – get there first and then differentiate. And we are positive that Oracle uses the OpenStack standard both on compute and storage… but that of course means questions on how and where to differentiate remain.

        From Kurian's presentation.

        Logically Oracle has begun with Storage and Compute, but it has some road to cover before being on par with Amazon’s AWS, Microsoft’s Azure and Google GCE. Credit goes to Oracle execs for acknowledging this, but the roadmap looks realistic and only to a certain point ambitious.

        A word on private cloud

        One of the key decisions the Oracle executive team took was to operate public and private cloud offerings on the same technology stack, a decision that has proven right and is paying off already. And customers really appreciate the capability to run a familiar technology stack both on premise and in the cloud, being flexible to move loads across on premise and public cloud. On the IaaS layer the Nimbula Cloud Director is the key asset to orchestrate this capability.

        From Kurian's presentation.

        Needless to say Oracle offers customers plenty of options for consolidation – for servers (with Nimbula), platforms (RAC) and databases (12c multitenancy). We expect customers to look heavily into these areas as they may realize attractive cost savings. It was also interesting to see that Oracle plans to offer very similar consolidation options for middleware – server (with Nimbula), platform (WebLogic clusters) and application server consolidation (multi-tenant WebLogic Server).


        Amit Zavery (@AZavery) later had a slide in his deck explaining how Oracle is working with Microsoft and Verizon, who run Oracle’s database and middleware on Azure and Verizon Cloud respectively.

        From Zavery's presentation.

        No surprise here – but I believe this is the first formal public slide on the subject. We think seeing that Oracle is able to partner / OEM / deliver (pick your best wording) its products is a good proof point of the attractiveness of the products, but it also forces Oracle to be open and flexible at certain intersections of its technology stack. A prominent example is the support for Microsoft’s Hyper-V hypervisor in the partnership with Windows Azure.


        6 months after checking in with Oracle at OpenWorld, there has been good progress across the board for the cloud products. There is still a lot of road to cover though, but all roadmaps were under NDA – so we can’t comment further. We still see the three strong converging forces working for Oracle and stand by our three concerns (they are here). But 2014 should be the year where it all comes together for Oracle.

        At the end of the day we can only repeat ourselves – the fully integrated technology stack that Oracle is largely using already and revving for the cloud remains one of the most intriguing technology stacks being built out there. But with well over 20k developers working on it, this is one of the largest engineering projects in the enterprise software industry ever, so the behind the scenes challenges around interdependencies, quality, ramp-up etc. can’t be overlooked. To give Oracle, Kurian and his team credit – there are close to no stories about product problems out there. Let’s hope it stays like this – better for Oracle customers, Oracle and ultimately the whole industry. 

        Also worth a look for the full picture

        • Is it all coming together for Oracle in 2014? Read here
        • From the fences - Oracle AR Meeting takeaways - read here (this was the last analyst meeting in spring 2013)
        • Takeaways from Oracle CloudWorld LA - read here (this was one of the first cloud world events overall, in January 2013)
        And if you want to read more of my findings on Oracle technology - I suggest:
        • Oracle integrates products to create its Foundation for Cloud Applications - read here.
        • Java grows up to the enterprise - read here.
        • 1st take - Oracle in memory option for its database - very organic - read here.
        • Oracle 12c makes the database elastic - read here.
        • How the cloud can make the unlikeliest bedfellows - read here.
        • Act I - Oracle and Microsoft partner for the cloud - read here.
        • Act II - The cloud changes everything - Oracle and - read here.
        • Act III - The cloud changes everything - Oracle and Netsuite with a touch of Deloitte - read here
        Lastly - paying tribute to my Future of Work / HCM / SaaS research area:
        • Oracle pushes modern HR - there is more than technology - read here. (Takeaways from the recent HCMWorld conference).
        • Why Applications Unlimited is a good strategy for Oracle customers and Oracle - read here.


        On March 10th 2014 SAP and Accenture announced the formation of a Business Solutions Group, to be based in San Jose. Enterprise application vendors like SAP have a long history of forming partnerships with system integrators (SIs). So is there something different in this announcement?

        Let’s take a look at the press release here:

         Accenture (NYSE: ACN) and SAP AG (NYSE: SAP) today announced the creation of a new business group focused on rapidly accelerating the time it takes to jointly develop and deliver new industry-specific solutions based on cloud and other digital technologies. The Accenture and SAP Business Solutions Group brings together the companies’ respective co-innovation, joint solution development and go-to-market teams in an effort to focus on significantly increasing time to value for clients globally.

        MyPOV – SAP looks for validation points for its technology, HANA being the most prominent product. Partnerships like these show a willingness of the ecosystem to partner with SAP on these products. Accenture - like all other SIs – is confronted with a significant business shift caused by the cloud. Traditionally very rich revenue streams from on premise implementations of enterprise solutions are at risk as these products move to the cloud. And cloud based implementations see lower implementation revenues, no need for onsite upgrading etc. So Accenture is looking for new and additional revenue streams – being the partner of a large enterprise application vendor like SAP gives Accenture new opportunities.

        Note that the press release is vague about the commitments with regard to exclusivity from both sides. Do not expect this to be an overall and exclusive partnership.

        The Accenture and SAP Business Solutions Group helps facilitate a combination of Accenture’s intellectual property and industry expertise with innovative solutions that are powered by the SAP HANA® Cloud Platform to more rapidly enable customers to harness the benefits of digital technologies including cloud, analytics and enterprise mobility. The goal is to significantly expedite the joint-solution development process to help allow the two companies to quickly bring breakthrough solutions to market across a range of industry sectors, with an initial focus on retail and consumer goods, automotive, energy and utilities.

        MyPOV – No surprise – it is all about HANA. And while SAP needs to create horizontal offerings on top of HANA – it’s a good move for SAP to partner with an Accenture to leverage its industry expertise. With a focus on retail and consumer goods, automotive, energy and utilities – the partnership addresses more than a third of the SAP install base – so there is plenty of potential market to sell into. The group structure also insulates SAP from any potential issues of mixing up Accenture intellectual property (sic!) with its internal product development efforts.

        Accenture and SAP are structuring the new Accenture and SAP Business Solutions Group to simplify the process of purchasing SAP solutions and Accenture services for clients. For example, a goal is for new industry solutions to be available through a single-contract business services model and on an “as-a-service” basis. The new arrangement represents an evolution of the work the two companies have conducted together through the Accenture Innovation Centers for SAP Solutions and further outlines the go-to-market framework first introduced in 2013. […]

        MyPOV – Again no surprise – cloud-based enterprise products also need elastic implementation resources. Of course humans (luckily) do not scale like IT resources – but coming closer to the dynamics of SaaS implementations and upgrade cycles certainly helps these offerings. And it’s worth noting – but equally expected – that a purchase from the newly formed group will not mean doing business with SAP and Accenture separately on a contractual basis. This will be a welcome simplification business-wise for future customers. The Group will have to address support and maintenance questions going forward though. Who will a customer call when something does not work?

        The Accenture and SAP Business Solutions Group will be based in San Jose, California, and the joint team will include some of the world’s leading solution development talent, supported by a dedicated team of sales, marketing, finance, legal and commercial professionals. Experts from both Accenture and SAP will collaborate to identify business needs, develop solutions, execute go-to-market strategies and help implement new solutions for clients.

        MyPOV – Joint expertise centers, centers of excellence etc. are nothing new for SI partnerships. That the new group will have its own sales, marketing, finance, legal and commercial professionals is new and may serve as a blueprint for more similar partnerships in the industry. Bringing in the administrative resources gives credibility to the claim in the press release that the Group will simplify doing business with Accenture / SAP. It will be even more interesting to see how the go-to-market will be addressed and solved. Will we see SIs remain neutral and partner with different vendors on the same opportunity – or will a significant corporate and IP investment skew the sales preference towards the enterprise application vendors where that investment and commitment have happened?

        The first industry solutions developed under this new commercial framework include the Marketing Performance Solution by Accenture and SAP, a marketing analytics solution that provides relevant customer data across enterprise operations, and the Upstream Production Operations by Accenture and SAP, which features a set of solutions, services and training for oil and gas companies’ production operations.

        MyPOV – Having two product offerings already available makes this an announcement with more teeth than other – often fluffy – ecosystem announcements. Now we will have to see how much customer traction SAP and Accenture achieve and what other products may be announced (at Sapphire?) soon.

        Overall POV

        This is not the usual go-to-market partnership announcement we have seen in the enterprise software space for decades. There are real products available now, the offering will be put into the market by dedicated sales and marketing efforts and should be – given the administrative investments – easier to procure.

        But it is not an exclusive arrangement from either side – regardless, Accenture deserves kudos for striking the first partnership of this kind. SAP gets a validation point for HANA – and a (much needed) vertical product story for the cloud. On the flip side this means that we will have to watch carefully how SAP builds out its own vertical enterprise automation plans. Ultimately SAP and Accenture have to show that they can sell these solutions beyond their shared install base, as the ultimate proof point of the attractiveness of their offering…

        Lastly let’s not forget the customers, for whom this should be a win. Anything that integrates SIs and products more closely, in an easier-to-procure way, is good for customers. Non-Accenture / SAP customers take note and look for similar partnerships most likely coming sooner rather than later.

        More from me on the SI business:

        • Is the existing SI business being travolged? Read here
