IBM’s Software Defined Environment

08.04.2014 by Mike Small

In IBM’s view the kinds of IT applications that organizations are creating are changing from internal facing systems to external facing systems.  IBM calls these kinds of systems “systems of record” and “systems of engagement” respectively.  The systems of record represent the traditional applications that ensure that the internal aspects of the business run smoothly and the organization is financially well governed.  The systems of engagement exploit the new wave of technology that is being used by customers and partners and which takes the form of social and mobile computing.  In IBM’s opinion a new approach to IT is needed to cater for this change, which IBM calls the Software Defined Environment (SDE).

According to IBM these systems of engagement are being developed to enable organizations to get closer to their customers and partners, to better understand their needs and to better respond to their issues and concerns.  They are therefore vital to the future of the business.

However, the way these systems of engagement are developed, deployed and exploited is radically different from that for systems of record.  The development methodology is incremental and highly responsive to user feedback.  Deployment requires IT infrastructure that can quickly and flexibly respond to use by people outside the organization.  Exploitation of these applications requires the use of emerging technologies like Big Data analytics, which can place unpredictable demands on the IT infrastructure.

In response to these demands IBM has a number of approaches; for example, in February I wrote about how IBM has been investing billions of dollars in the cloud.  IBM also offers something it calls SDE (Software Defined Environment).  IBM’s SDE custom-builds business services by leveraging the infrastructure according to workload types, business rules and resource availability.  Once these business rules are in place, resources are orchestrated by patterns: best practices that govern how to build, deploy, scale and optimize the services that these workloads deliver.

IBM is not alone in this approach; others, notably VMware, are heading in the same direction.

In the IBM approach, abstracted and virtualized IT infrastructure resources are managed by software via API invocations.  Applications automatically define their infrastructure requirements, configuration and service level expectations.  The developer, the people deploying the service and the IT service provider are all taken into account by the SDE.

This is achieved by building the IBM SDE on software and standards from the OpenStack Foundation, of which IBM is a member.  IBM has added specific components and functionality to OpenStack to fully exploit IBM hardware and software; these include drivers for IBM storage devices, PowerVM, KVM and IBM network devices.  IBM has also included some “added value” functionality, including management API additions, scheduler enhancements, management console GUI additions and a simplified install.  Since the IBM SmartCloud offerings are also based on OpenStack, cloud bursting into IBM SmartCloud (as well as any other cloud based on OpenStack) becomes easier, except where there is a dependency on the added-value functionality.

One of the interesting areas is the support provided by the Platform Resource Scheduler for the placement of workloads.  The supported policies make it possible to place workloads in a wide variety of ways, including packing workloads onto the fewest physical servers or spreading them across several, balancing load and memory across servers, and keeping workloads physically close together or physically separate.
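
To make these placement policies more concrete, the short Python sketch below illustrates the difference between a “pack onto the fewest servers” policy and a “spread across servers” policy.  It is purely illustrative and is not the Platform Resource Scheduler itself; the host names, capacities and workload sizes are invented for the example.

    # Illustrative only: a toy placement function showing "pack" versus
    # "spread" policies of the kind described above. This is not IBM's
    # Platform Resource Scheduler; hosts and workloads are invented.
    def place(workloads, hosts, policy="pack"):
        """Assign each (name, cpu_demand) workload to a host with enough spare CPU."""
        placement = {}
        for name, demand in workloads:
            candidates = [h for h, spare in hosts.items() if spare >= demand]
            if not candidates:
                raise RuntimeError(f"no host can fit workload {name}")
            if policy == "pack":
                # Fewest servers: prefer the fullest host that still fits.
                chosen = min(candidates, key=lambda h: hosts[h])
            else:
                # Spread / load balance: prefer the host with the most spare capacity.
                chosen = max(candidates, key=lambda h: hosts[h])
            hosts[chosen] -= demand
            placement[name] = chosen
        return placement

    hosts = {"host-a": 16, "host-b": 16, "host-c": 16}      # spare vCPUs per host
    workloads = [("web-1", 4), ("web-2", 4), ("db-1", 8)]
    print(place(workloads, dict(hosts), policy="pack"))     # all three on one host
    print(place(workloads, dict(hosts), policy="spread"))   # one workload per host

In a real SDE these decisions are driven by live telemetry and policy definitions rather than static numbers, and the scheduler also handles affinity and anti-affinity, i.e. keeping workloads physically close together or physically separate.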

IBM sees organizations moving to SDEs incrementally rather than in a big bang approach.  The stages they see are virtualization, elastic data scaling, elastic transaction scaling, policy based optimization and finally application aware infrastructure.

In KuppingerCole’s opinion SDCI (Software Defined Computing Infrastructure) is the next big thing.  Martin Kuppinger wrote about this at the end of 2013.  IBM’s SDE fits into this model and has the potential to allow end user organizations to make better use of their existing IT infrastructure and to provide greater flexibility to meet changing business needs.  It is good that IBM’s SDE is based on standards; however there is still a risk of lock-in since the standards in this area are incomplete and still emerging.  My colleague Rob Newby has also written about the changes that are needed for organizations to successfully adopt SDCI.  In addition, it will require a significant measure of technical expertise to implement successfully in full.

For more information on this subject there are sessions on Software Defined Infrastructure and a Workshop on Negotiating Cloud Standards Jungle at EIC May 12th to 16th in Munich.


Smarter Risk

06.12.2013 by Mike Small

According to IBM a consistent way to manage all types of risk is the key to success for financial services organizations.  To support this IBM will be rolling out their Smarter Risk offering during Q1 2014.  Failure to properly manage risk has been alleged to be the cause of the financial crisis and, to force financial services organizations to better manage risk, the regulators around the world are introducing tougher rules.

The underlying causes of the damaging financial crisis can be traced back to the management of risk.  Financial services organizations need to hold capital to protect against the various forms of risk.  The more capital they have to hold to cover existing risks, the less opportunity they have to use that capital in other ways.  So fully understanding the risks faced is a key factor in organizational success.

According to Gillian Tett in her book Fool’s Gold, the roots of the financial crisis can be traced back to the aftermath of the 1989 Exxon Valdez disaster.  To cover the billions of dollars needed for the clean-up, Exxon requested a credit line from its bankers J.P. Morgan and Barclays.  This enormous credit line required the banks to set aside large amounts of capital.  In order to release this capital, J.P. Morgan found a way to sell the credit risk to the European Bank for Reconstruction and Development.  This was one of the earliest credit default swaps and, while this particular one was perfectly understood by all parties, these types of derivatives evolved into things like synthetic collateralized debt obligations (CDOs), which were not properly understood and were to prove to be the undoing.

IBM believes that, in order to better manage risk, financial services organizations need to manage all forms of risk in a consistent way since they all contribute to the ultimate outcome for the business.  These include financial risk, operational risk, fraud and financial crimes, as well as IT security.  The approach they advise is to build trust through better and more timely intelligence, then to create value by taking a holistic view across all the different forms of risk.  The measurement of risks is a complex process and involves many steps based on many sources of data.  Often a problem that is detected at a lower level is not properly understood at a higher level or is lost in the noise.  Incorrect priorities may be assigned to different kinds of risk or the relative value of different kinds of intelligence may be misjudged.

So how does this relate to IT security?  Well, security is about ensuring the confidentiality, integrity and availability of information.  In the last week the UK bank RBS suffered a serious outage which led to its customers’ payment cards being declined over a period of several hours.  The reasons for this have not been published, but the reputational damage must be great since this is the latest in a series of externally visible IT problems suffered by the bank.  IBM provided an example of how it had used a prototype Predictive Outage Analytics tool on a banking application.  This banking application suffered 10 outages, each requiring over 40 minutes of recovery time, over a period of 4 weeks.  Analysing the system monitoring and performance data, the IBM team were able to show that these outages could have been predicted well in advance, and that the costs and reputational damage could have been avoided if appropriate action had been taken sooner.
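
IBM has not published the details of how this prototype works, but the general idea of predicting an outage from monitoring data can be illustrated with a simple trend extrapolation: if a resource such as free memory is being consumed at a steady rate, a least-squares fit gives a rough estimate of when it will run out.  The sketch below is my own illustration of that idea, not IBM’s tool; the sample figures and threshold are invented.

    # Illustrative only: extrapolating a monitored resource trend to estimate
    # time to exhaustion. This is not IBM's Predictive Outage Analytics tool;
    # the sample data and threshold are invented.
    def minutes_to_exhaustion(samples, threshold):
        """samples: list of (minutes_elapsed, free_mb) measurements.
        Returns the estimated number of minutes after the last sample at which
        the value falls below threshold, or None if the trend is flat or rising."""
        n = len(samples)
        mean_t = sum(t for t, _ in samples) / n
        mean_v = sum(v for _, v in samples) / n
        # Ordinary least-squares slope of free memory over time.
        slope = (sum((t - mean_t) * (v - mean_v) for t, v in samples)
                 / sum((t - mean_t) ** 2 for t, _ in samples))
        if slope >= 0:
            return None  # not trending towards exhaustion
        intercept = mean_v - slope * mean_t
        crossing = (threshold - intercept) / slope  # time at which threshold is hit
        return crossing - samples[-1][0]

    # Free memory (MB) sampled every 10 minutes on a slowly leaking system.
    samples = [(0, 4000), (10, 3700), (20, 3350), (30, 3050), (40, 2700)]
    eta = minutes_to_exhaustion(samples, threshold=500)
    print(f"estimated minutes until free memory drops below 500 MB: {eta:.0f}")

Real predictive analytics combines many such signals with far more sophisticated models, but even this simple view shows how an outage that appears to come out of nowhere can be visible in the monitoring data well in advance.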

So in conclusion this is an interesting initiative from IBM.  It is not the first time that IT companies have told their customers that they need to take a holistic view to manage risk and that IT risk is important to the business.  However, as a consequence of the financial crisis, the financial services industry is now subject to a tightening screw of regulation around the management of risk.  Under these circumstances, tools that can help these organizations to understand, explain and justify their treatment of risks are likely to be welcomed.  This holistic approach to the management of risk is not limited to financial organizations and many other kinds of organization could also benefit.  In particular, with the increasing dependence upon cloud computing and the impact of social and mobile on the business, the impact of IT risk has become a very real business issue and needs to be treated as such.


Salesforce Identity

15.10.2013 by Mike Small

Today Salesforce.com announced the general availability of Salesforce Identity – what does this mean for customers and the IAM market?

In some ways this is not a completely new product; Salesforce.com CRM customers have been using much of this for some time.  However, what is announced today extends the functionality well beyond what previously existed.  Salesforce Identity provides a web single sign-on capability that can be white labelled and that can be based on authentication by Salesforce.com or a number of other trusted identity providers.  The Salesforce.com authentication methods include two-factor and risk-based approaches as well as user ID and password.  It supports the main authentication standards like SAML, OAuth and OpenID as well as third-party providers like Facebook Connect.  It includes an extensible cloud directory and an optional “Salesforce Identity Connect” module.  This latter component is built on ForgeRock’s Open Identity Stack and creates a bridge between existing on-premise directory solutions and Salesforce Identity.  Salesforce Identity also includes monitoring and reporting capabilities to allow the development of user activity and compliance reports.
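
To give a flavour of what this standards support looks like in practice, the sketch below shows the standard OAuth 2.0 web server flow against Salesforce’s published login endpoints: the user’s browser is redirected to the authorization endpoint, and the authorization code returned to the application is then exchanged for an access token.  The client ID, secret and redirect URI are placeholders that would come from a Connected App configuration; treat the details as an assumption and check them against the current Salesforce documentation.

    # Illustrative sketch of the OAuth 2.0 "web server" flow against
    # Salesforce's login endpoints. The client credentials and redirect URI
    # are placeholders; verify the details against current Salesforce docs.
    from urllib.parse import urlencode

    import requests  # third-party: pip install requests

    AUTH_URL = "https://login.salesforce.com/services/oauth2/authorize"
    TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"
    CLIENT_ID = "YOUR_CONNECTED_APP_CLIENT_ID"             # placeholder
    CLIENT_SECRET = "YOUR_CONNECTED_APP_CLIENT_SECRET"     # placeholder
    REDIRECT_URI = "https://example.com/oauth/callback"    # placeholder

    def authorization_url(state):
        """Build the URL the user's browser is sent to for login and consent."""
        return AUTH_URL + "?" + urlencode({
            "response_type": "code",
            "client_id": CLIENT_ID,
            "redirect_uri": REDIRECT_URI,
            "state": state,
        })

    def exchange_code(code):
        """Exchange the authorization code returned to REDIRECT_URI for tokens."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
        })
        resp.raise_for_status()
        # The JSON response typically includes access_token and instance_url.
        return resp.json()

In a white-labelled deployment the login host would typically be replaced by the organization’s own custom domain, but the flow itself stays the same.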

So – is this just a cloud based web access management system?  Should this simply be viewed as being in direct competition with the range of IAM products in the market?  How does this stack up against the Microsoft cloud directory offering?  (See the Blog by Martin Kuppinger: Understanding Azure Active Directory)

As a company Salesforce.com has declared that its vision is to help organizations connect with their customers, partners, associates and devices.  To do this successfully requires an understanding of identities.  The organization needs to be able to uniquely identify a customer, partner or associate irrespective of how they connect, from whatever device and by whatever means.  So this announcement needs to be seen as part of this core vision.  When taken together with the other Salesforce.com developments, acquisitions and partnerships, it is much more than just another WAM: it is a key component of a platform for organizations to connect with their customers, partners and associates, and an essential component needed to support the other parts of the platform.

So – through this platform – Salesforce.com are seeking to change the way in which identities are managed by organizations.  To alter the perspective away from one focussed on internal IT systems and users to an outward looking one focussed on customers and partners whilst retaining internal control: integrating enterprise identity with CRM.


Avoiding Data Breaches and Managing Big Data

09.10.2013 by Mike Small

Today information is the foundation upon which businesses are built and which organizations need in order to prosper.  However, given its value, information is not treated with sufficient respect by everyone in the organization.  It sometimes seems that the only people who understand the value of information are those who are trying to steal it!

Big data makes this problem worse – in addition to the vast quantity of data from the Internet of Things and social media, so much unstructured information is now being created within organizations.  Who owns this data and who is responsible for its security?

I believe that what is needed is better information stewardship.  Information stewardship is a holistic concept that goes beyond information security and is based on the idea of taking care of property that is not your own.  Information stewardship involves everyone who creates information, not just the application owners or the IT service provider.

I will be presenting a session on this subject at RSA Europe on October 29th.

You can listen to a short podcast preview of this session.


The Future of the Cloud

03.10.2013 by Mike Small

As a UK member of ISACA as well as an industry analyst, I was privileged to participate in a round table on the future of the cloud; the results were published in a supplement to the Guardian newspaper on September 27th.

Here is a summary of my thoughts on this subject:

The cloud is about efficiency and economies of scale.  The successful CSPs (Cloud Service Providers) will be those that can provide value for money to their customers.  For large enterprises the cloud will add complexity by becoming yet another platform to be accommodated.  However for SMEs the cloud can provide the IT services they need but could not run effectively themselves.  For start-ups the cloud provides a low risk opportunity to create new services.

The cloud is having an impact on conventional hosting services by creating what appears to be a cheaper and more easily accessible alternative.  The recent acquisition of SoftLayer by IBM shows that existing IT hosting organizations and service providers will need to acquire cloud skills and technologies if they are not to lose out.  I believe we will see consolidation in the market.

The cloud also provides a challenge and an opportunity for IT solution vendors.  Large CSPs can provide a more stable platform for these solutions than the vendors themselves.  Basing their solution on a cloud platform allows the solution provider to focus on their specific product and skills rather than on the challenges of running an IT service.  It also provides a great opportunity for new solutions to emerge.

However, it is not straightforward for organizations to migrate existing applications to the cloud.  This is a limiting factor in the take-up of the cloud by enterprises.  I expect that the market in services to help organizations migrate existing applications to the cloud will grow.

The major concerns that most organizations have with the use of the cloud relate to security and compliance.  The large CSPs have taken these concerns on board and, in general, offer IT services and infrastructure that are as secure as, if not more secure than, those which most organizations can provide for themselves.  There has also been a growth in CSPs providing services that are focussed on markets with specific areas of compliance.  However the standard contracts offered by many CSPs still provide limited liability in the event of failure of the service or for security breaches.

For organizations, here are the keys to successful exploitation of the cloud:

  1. Understand, at board level, what your business objectives are for exploiting the cloud.
  2. Set constraints that limit the risks relating to the security of your data and compliance with laws and regulations to a level that you are comfortable with.
  3. Trust the CSP, but verify that trust through independent certification and monitoring of the service.

Oracle and Salesforce.com announce partnership

28.06.2013 by Mike Small

Oracle and Salesforce.com CEOs Larry Ellison and Marc Benioff hosted a briefing call on June 27th to announce a partnership between these two highly successful companies.  What does this partnership cover and what does it mean?

Salesforce.com is built on Oracle technology, so Salesforce is very dependent upon Oracle.  Marc Benioff confirmed that Salesforce has confidence in the latest releases from Oracle, including the Oracle 12c database, Oracle Linux and Oracle Exadata.  Larry Ellison announced that this partnership will ensure out-of-the-box integration of Salesforce.com CRM with Oracle HCM and Oracle financial applications.  However there will be no cross-selling of each other’s products; each company’s sales force will continue to sell only its own products.

So what are the benefits for customers?  It is very difficult to quantify the benefits but qualitatively the integrations will be more reliable, more secure and have better performance than ad hoc ones.   At the moment organizations have to build their own integration and this is costly to create and costly to maintain.  This partnership should remove these costs and hence is good for the large number of organizations that are using or will decide to use Salesforce.com CRM together with the other Oracle applications.

It looks like Oracle has conceded that organizations which have adopted Salesforce.com CRM will not be persuaded to migrate to Oracle CRM.  At the same time Oracle is assured of a significant continuing revenue stream for its products from Salesforce.com.

Salesforce.com has stated that its aim is to help organizations get closer to their customers, partners and associates, and it is transforming itself from a CRM provider into a platform provider.  So it would appear that it is not interested in competing with the other Oracle applications.

This is where it becomes interesting – on the call it was made clear that the partnership does not cover the platform.  Both CEOs described the importance of a platform that will support the explosion of data from networked devices.  However both companies have their own evolving platform to provide a solution for that problem.  So here the competition continues.


IBM acquires SoftLayer

06.06.2013 by Mike Small

On June 4th, IBM announced that it is to acquire SoftLayer Technologies Inc. and at the same time announced the formation of a new Cloud Services division.  Following the close of this acquisition the new division will combine SoftLayer with IBM SmartCloud into a global platform.

So what is special about SoftLayer, why is this important and what does it mean?

SoftLayer Technologies Inc., which was founded in 2005, has over 100,000 devices under management, which makes it one of the largest privately held computing infrastructure providers in the world.  SoftLayer has redefined the delivery of cloud computing through its IMS (Infrastructure Management System).  Most cloud providers deliver their compute services using virtualization, which keeps the customer one step removed from the underlying hardware.  The IMS, which is around 3 million lines of code, makes it possible to offer raw hardware on demand as a pay-per-use cloud service.  This has been very attractive for certain kinds of applications, for example gaming, that require very intensive compute or I/O performance.  In effect this system makes it possible to offer anything now available in a data centre as a cloud service.
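
To illustrate the “everything through an API” point: SoftLayer exposes its infrastructure, including bare metal servers, through a public API with official client libraries.  A minimal sketch using the SoftLayer Python client (pip install SoftLayer) might look like the following; the credentials are placeholders and the exact calls should be checked against the SoftLayer API documentation.

    # Illustrative only: listing an account's bare metal servers through the
    # SoftLayer API using the official Python client (pip install SoftLayer).
    # Credentials are placeholders; check the call details against the docs.
    import SoftLayer

    client = SoftLayer.create_client_from_env(
        username="YOUR_SOFTLAYER_USERNAME",   # placeholder
        api_key="YOUR_SOFTLAYER_API_KEY",     # placeholder
    )

    # SoftLayer_Account::getHardware returns the bare metal servers on the account.
    for server in client.call("Account", "getHardware", mask="mask[hostname,domain]"):
        print(server["hostname"], server["domain"])

The point is not this particular call but the fact that provisioning, reconfiguring and decommissioning physical servers can be driven entirely through such APIs, which is what allows bare metal to be sold as a pay-per-use cloud service.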

IBM is already one of the world’s leading cloud service providers, with cloud revenue expected to reach $7 billion annually by the end of 2015.  The acquisition is intended to strengthen IBM’s existing SmartCloud portfolio by providing a broader range of choices to enterprises moving their workloads to the cloud, and to better meet the needs of those organizations that were founded on or in the cloud.

This acquisition extends the IBM SmartCloud beyond server and storage virtualization by making “bare metal” computing hardware available as a cloud service.  It extends the way in which IBM will be able to deliver the kinds of services that it always delivered through its GTS organization.  It confirms that the cloud is now a mainstream way of delivering IT services.

A word of caution concerns the proprietary nature of SoftLayer’s home grown IMS.  IBM has announced that it intends to expand SoftLayer’s cloud offerings to be OpenStack compliant consistent with its SmartCloud portfolio and historic commitment to open standards.  KuppingerCole believes that organizations using the cloud need to take care to avoid lock-in, and standards provide an important way to ensure this.  It is important that IBM delivers on this commitment.


The Cloud Blossoms in Europe

24.05.2013 by Mike Small

A recent report commissioned by CA Technologies Inc. looks at the growth in the use of cloud services and evolving attitudes to their security.  The report contains some interesting findings.  For instance, Europe is catching up with the US, with “38% of the European respondents using cloud for two to three years”, while “55% of the companies in the US have been in the cloud for three or more years”.

This finding is confirmed by the recent announcement by salesforce.com that it has signed an agreement to establish a European data centre in the UK in 2014.  According to Marc Benioff, Chairman and CEO of salesforce.com, “Europe was salesforce.com’s fastest growing region in our fiscal year 2013, delivering constant currency revenue growth of 38%”.  The same press release includes a forecast from IDC “that Europe’s public cloud software market will grow three times faster than other IT segments, at a CAGR of 30% to reach €23.9 billion by 2017”.

One of the reasons salesforce.com gave for opening the European data centre, at its Cloudforce event in London on May 2nd, was to answer the security concerns of EU governments and organizations relating to the location of their data.  While security concerns remain a key issue for organizations adopting the cloud, the CA Technologies Inc. report discusses the “Security Contradiction”: according to the report, “Ninety-eight percent of enterprises surveyed reported that the cloud met or exceeded their expectations for security”.  At the KuppingerCole European Identity & Cloud Conference held in Munich May 14-17, 2013, one session given by a UK organization described how it had moved to the cloud primarily for security reasons.

So, according to these reports, it would seem that the cloud is blossoming in Europe and that customers believe cloud providers are making good on their promises around security.  However, our advice remains “Trust but Verify”: using the cloud inherently involves an element of trust between the organization using the cloud service and the CSP.  This trust must not be unconditional and it is vital to ensure that the trust can be verified.  Organizations still need a fast, reliable and risk-based approach to selecting cloud services, as described in our Advisory Note: Selecting your cloud provider – 70742.


Cloud: hope for the best but plan for the worst

15.02.2013 by Mike Small

The past couple of weeks must have been an anxious time for the customers of the outsourcing service run by 2e2, which went into administration on January 29th.  This impacted a range of organizations, including hospitals.  The good news today is that Daisy Group plc has been appointed to manage the 2e2 data centre business.  Organizations are now almost totally dependent on their IT services to operate, and it is tempting to think that outsourcing a service absolves you of any responsibility.  This is not the case; an organization using a cloud service is still responsible for the continuity of its business.  The lesson to be learned from this is that while organizations may hope for the best, they need to plan for the worst!

A previous example of the need for business continuity planning occurred some years ago.  On the 29th of March 2004 a fire in tunnels under the city of Manchester had a major impact on telecommunications in the North of England.  Emergency services were hit and mobile phone services disrupted; it was estimated that 130,000 phone lines were affected.  It was not until April 5th of that year that services were back to normal.

Most organizations depend heavily upon the public telephone network, and this network is normally one of the most reliable services, so how did they cope with this disruption?  The organizations that had an up-to-date and tested disaster recovery plan (mostly the large ones) were able to continue their operations.  The small organizations without a plan were badly hit.

Smaller organizations, ones that are not able to afford their own highly resilient data centres, should benefit the most from the resilience offered by the larger cloud service providers.  However, as the example above illustrates, small organizations tend not to have a business continuity plan. In addition not all large organizations have included cloud services in their plan.

Organizations need to determine the business needs for the continuity of any services and data moved to the cloud. They should have policies, processes and procedures in place to ensure that the business requirements for business continuity are met. These policies and procedures involve not only the CSP, but also the customer as well as intermediate infrastructure such as telecommunications and power supplies. They should form part of a complete business continuity plan. Such a plan is part of the operations of what KuppingerCole defines as the “IT Management and Security” layer within the IT organization, which is described in the KuppingerCole Scenario: Understanding IT Service and Security Management – 70173.

Here are some points that need to be considered.  For a more detailed view see the KuppingerCole Advisory Note: Avoiding Lock-in and Availability Risks in the Cloud – 70171.

End to End Infrastructure: Use of the Cloud depends upon the infrastructure being available from end to end. Not only do the equipment and services at the CSP have to be operational, but the network and the customer equipment also need to be available and working. Therefore the Cloud customer, as well as the CSP, needs to ensure the availability of the components under their control as well as having appropriate contingency plans.

Service and Data Availability: The data or the service may become unavailable for many reasons.  These include misconfigurations and bugs as well as hardware failures; in addition, data may be corrupted or erased. The CSP may offer several approaches to minimize the risk of data becoming unavailable. However, if timely access to the data is important, ensure that you understand the promised time to recovery. In some circumstances the Cloud customer may need to perform backups themselves to ensure the required level of business continuity.

Theft or Seizure: The equipment that is used to provide the Cloud service may be stolen or seized by law enforcement because of the activities of co-tenants. These can both lead to a loss of availability of the Cloud service.

Supplier Failure: The cloud service may become unavailable due to the failure of the CSP or of one of its providers. The CSP may go out of business for many reasons, ranging from withdrawal from the market through to financial bankruptcy. The CSP may also outsource some of the services that it depends upon, and its own supply chain could fail with the failure of one of these providers. Whatever the reason, the impact of this failure on the cloud customer could be very high.

Power Loss and Natural Disasters: The cloud service depends upon the availability of power for systems as well as air-conditioning and other ancillary services for the data centre. An example of this was the lightning strike in Dublin that caused Amazon and Microsoft cloud services hosted there to go offline in 2011.

For more details on best practices for cloud computing, attend the European Identity & Cloud Conference held in Munich during May 2013.  This will feature a one-day workshop on Cloud Provider Assurance.  This workshop uses real-life scenarios to lead the participants through the steps necessary to assure that cloud services meet their organization’s business requirements.


Top Ten Tips for Negotiating and Assuring Cloud Services

04.02.2013 by Mike Small

KuppingerCole research confirms that “security, privacy and compliance issues are the major inhibitors preventing organizations from moving to a private cloud.”  Our report on Cloud Provider Assurance provides in-depth information on how to manage these issues.  Here is a summary of our top ten tips for negotiating and assuring cloud services.

  1. Consistent IT governance is critical: The cloud is just an alternative way of obtaining IT services and, for most organizations, it will be only one component of the overall complex IT service infrastructure.  IT governance provides a way to manage, secure, integrate, orchestrate and assure services from diverse sources in a consistent and effective way.
  2. Adopt best practices that are relevant to your organization from one or more of the frameworks or industry standards that are available.  These represent the combined knowledge and experience of the best brains in the industry.  However – be selective – not everything will apply to your organization.  Whatever standards you choose – select a CSP (Cloud Service Provider) that conforms to these standards.
  3. Understand the business requirements for the cloud service – security, privacy and compliance needs follow directly from these.  There is no absolute level of assurance for a cloud service – it needs to be as secure, compliant and cost effective as dictated by the business needs – no more and no less.
  4. Implement a standard process for selecting cloud services: This should enable fast, simple, reliable, standardized, risk-oriented selection of cloud service providers.  Without this there will be a temptation for lines of business to acquire cloud services directly without fully considering the needs for assurance.
  5. Manage Cloud Contracts: beware of CSP standard terms and conditions and consider carefully when to accept them.  If the CSP standard contract satisfies the business needs, that is fine.  If not, accept nothing less than you would from your in-house IT!  If the CSP won’t negotiate, try going via an integrator.
  6. Classify data and applications in terms of their business impact, the sensitivity of the data and regulatory requirements.  This helps the procurement process by setting many of the major parameters for the cloud service, and the needs for monitoring and assurance, in advance.
  7. Division of responsibilities:  when adopting a cloud service make sure you understand what your responsibilities are as well as those of the CSP.  For example, in most cases under European law, the organization using a cloud service is the “data controller” and remains responsible for personal data held in the cloud. 
  8. Independent Certification of CSP: Look for regular independent certification that the service parameters which are relevant to your business need are being met.  Typically external audits are only performed once or twice per annum and so whilst they are important they only provide snapshots of the service.
  9. Continuous Assurance: To provide continuous assurance of the cloud service, require the CSP to provide regular access to monitoring data that allows you to monitor performance against the service parameters.
  10. Trust but Verify – Using the cloud inherently involves an element of trust between the organization using the cloud service and CSP.  However – this trust must not be unconditional and it is vital to ensure that the trust can be verified.

For more details on best practices for cloud computing, attend the European Identity & Cloud Conference held in Munich during May 2013.  This will feature a one-day workshop on Cloud Provider Assurance.  This workshop uses real-life scenarios to lead the participants through the steps necessary to assure that cloud services meet their organization’s business requirements.


© 2014 Mike Small, KuppingerCole