IBM’s Software Defined Environment

08.04.2014 by Mike Small

In IBM’s view the kinds of IT applications that organizations are creating are changing from internal-facing systems to external-facing systems.  IBM calls these kinds of systems “systems of record” and “systems of engagement” respectively.  The systems of record represent the traditional applications that ensure that the internal aspects of the business run smoothly and the organization is financially well governed.  The systems of engagement exploit the new wave of technology that is being used by customers and partners and which takes the form of social and mobile computing.  In IBM’s opinion a new approach to IT is needed to cater for this change, which IBM calls the Software Defined Environment (SDE).

According to IBM these systems of engagement are being developed to enable organizations to get closer to their customers and partners, to better understand their needs and to better respond to their issues and concerns.  They are therefore vital to the future of the business.

However, the way these systems of engagement are developed, deployed and exploited is radically different from that for systems of record.  The development methodology is incremental and highly responsive to user feedback.  Deployment requires IT infrastructure that can quickly and flexibly respond to use by people outside the organization.  Exploitation of these applications requires the use of emerging technologies like Big Data analytics, which can place unpredictable demands on the IT infrastructure.

In response to these demands IBM has a number of approaches; for example, in February I wrote about how IBM has been investing billions of dollars in the cloud.  IBM also offers something it calls SDE (Software Defined Environment).  IBM’s SDE custom-builds business services by leveraging the infrastructure according to workload types, business rules and resource availability. Once these business rules are in place, resources are orchestrated by patterns: best practices that govern how to build, deploy, scale and optimize the services that these workloads deliver.

IBM is not alone in this approach; others, notably VMware, are heading in the same direction.

In the IBM approach, abstracted and virtualized IT infrastructure resources are managed by software via API invocations.  Applications automatically define their infrastructure requirements, configuration and service level expectations.  The developer, the people deploying the service and the IT service provider are all taken into account by the SDE.
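The idea of an application declaring its own infrastructure requirements can be made concrete with a small sketch. The field names and the provisioning stub below are my own illustrative assumptions, not IBM’s actual SDE interface; they simply show the pattern of a declarative workload description being turned into infrastructure API invocations.

```python
# Illustrative sketch only: a workload declares its requirements as data,
# and an orchestrator turns that declaration into infrastructure API calls.
# The field names and the provision() stub are assumptions for illustration,
# not IBM's SDE API.

workload = {
    "name": "mobile-engagement-app",
    "pattern": "web-scale",          # best-practice pattern to apply
    "compute": {"vcpus": 4, "memory_gb": 8, "min_instances": 2, "max_instances": 20},
    "storage": {"type": "block", "size_gb": 100},
    "network": {"public": True},
    "slo": {"response_ms": 200, "availability": "99.9%"},
}

def provision(spec):
    """Translate a declarative workload spec into (mocked) API invocations."""
    calls = []
    for i in range(spec["compute"]["min_instances"]):
        calls.append(("create_server", {"name": f"{spec['name']}-{i}",
                                        "vcpus": spec["compute"]["vcpus"],
                                        "memory_gb": spec["compute"]["memory_gb"]}))
    calls.append(("create_volume", {"size_gb": spec["storage"]["size_gb"]}))
    if spec["network"]["public"]:
        calls.append(("allocate_floating_ip", {"server": f"{spec['name']}-0"}))
    return calls

for api_call, args in provision(workload):
    print(api_call, args)
```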

IBM achieves this by building its SDE on software and standards from the OpenStack Foundation, of which IBM is a member.  IBM has added specific components and functionality to OpenStack to fully exploit IBM hardware and software, including drivers for IBM storage devices, PowerVM, KVM and IBM network devices.  IBM has also included some “added value” functionality, including management API additions, scheduler enhancements, management console GUI additions and a simplified install.  Since the IBM SmartCloud offerings are also based on OpenStack, this makes cloud bursting into the IBM SmartCloud (as well as any other cloud based on OpenStack) easier, except where there is a dependency on the added value functionality.

One of the interesting areas is the support provided by the Platform Resource Scheduler for the placement of workloads.  The supported policies make it possible to place workloads in a wide variety of ways, including packing workloads onto the fewest physical servers or spreading them across several, load and memory balancing, and keeping workloads physically close together or physically separate.
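A toy model of two of these placement policies is sketched below. The greedy “pack” and “spread” strategies are illustrative assumptions about how such policies behave, not the actual Platform Resource Scheduler algorithms.

```python
# Illustrative sketch of "pack" vs "spread" placement policies.
# This is a toy model, not the actual IBM Platform Resource Scheduler logic.

def place(workloads, hosts, policy="pack"):
    """Assign each workload (cpu demand) to a host dict of {name: free_cpu}."""
    placement = {}
    for name, cpu in workloads.items():
        candidates = [h for h, free in hosts.items() if free >= cpu]
        if not candidates:
            raise RuntimeError(f"no capacity for {name}")
        if policy == "pack":      # fewest physical servers: fill the fullest host first
            host = min(candidates, key=lambda h: hosts[h])
        else:                     # spread: balance load across hosts
            host = max(candidates, key=lambda h: hosts[h])
        hosts[host] -= cpu
        placement[name] = host
    return placement

workloads = {"web-1": 2, "web-2": 2, "db-1": 4}
print(place(dict(workloads), {"hostA": 8, "hostB": 8}, policy="pack"))    # all on hostA
print(place(dict(workloads), {"hostA": 8, "hostB": 8}, policy="spread"))  # split across hosts
```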

IBM sees organizations moving to SDEs incrementally rather than in a big bang approach.  The stages it sees are virtualization, elastic data scaling, elastic transaction scaling, policy based optimization and finally application aware infrastructure.

In KuppingerCole’s opinion SDCI (Software Defined Computing Infrastructure) is the next big thing.  Martin Kuppinger wrote about this at the end of 2013. IBM’s SDE fits into this model and has the potential to allow end user organizations to make better use of their existing IT infrastructure and to provide greater flexibility to meet changing business needs.  It is good that IBM’s SDE is based on standards; however, there is still a risk of lock-in since the standards in this area are incomplete and still emerging.  My colleague Rob Newby has also written about the changes that are needed for organizations to successfully adopt SDCI.  In addition, it will require a significant measure of technical expertise to implement successfully in full.

For more information on this subject there are sessions on Software Defined Infrastructure and a Workshop on Negotiating the Cloud Standards Jungle at EIC, May 12th to 16th in Munich.


Card Clash on the London Underground

07.03.2014 by Mike Small

Recently there have been posters in London Underground stations warning users of Oyster Cards – the Transport for London (TfL) NFC enabled electronic travel wallet – that there is a risk of “card clash”.  These posters warn that they need to keep other contactless NFC payment cards separate from their Oyster Card when they “touch in” on a bus to avoid the risk that the wrong card would be charged.  TfL will be rolling out the ability to use NFC enabled payment cards on the Tube (London Underground), Overground and DLR later in 2014, and this could lead to further problems.  The charges on the London Underground are based on the journey made and the system depends upon the same card “touched in” on a reader at the origin of the journey being “touched out” at the destination.  If a different card is used at each end of the journey both cards are charged the maximum fare.
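A simple model makes the “card clash” failure mode concrete. The fares and card identifiers below are invented for illustration; the point is only that the system pairs touch-in and touch-out events by card, so a journey split across two cards produces two unmatched events and two maximum fares.

```python
# Toy model of touch-in/touch-out charging, with invented fares,
# to illustrate why "card clash" leads to two maximum fare charges.

MAX_FARE = 8.00      # charged when a touch has no matching touch on the same card
JOURNEY_FARE = 2.80  # charged for a completed, matched journey

def charge(events):
    """events: list of (card_id, 'in' | 'out'); returns charges per card."""
    open_journeys, charges = {}, {}
    for card, action in events:
        if action == "in":
            open_journeys[card] = True
        elif action == "out" and open_journeys.pop(card, False):
            charges[card] = charges.get(card, 0) + JOURNEY_FARE
        else:  # touch-out with no matching touch-in on this card
            charges[card] = charges.get(card, 0) + MAX_FARE
    for card in open_journeys:  # touch-in that was never closed
        charges[card] = charges.get(card, 0) + MAX_FARE
    return charges

# Same card at both ends: one normal fare.
print(charge([("oyster", "in"), ("oyster", "out")]))   # {'oyster': 2.8}
# Card clash: Oyster read on entry, bank card read on exit: two maximum fares.
print(charge([("oyster", "in"), ("bankcard", "out")])) # {'bankcard': 8.0, 'oyster': 8.0}
```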

NFC technology is an important enabling technology for the Internet of Things (IoT) and the vision for the IoT makes bold promises of benefits for individuals and businesses.  These benefits include making life easier for the individual while allowing businesses to be more efficient.  Being charged twice for the same journey doesn’t seem to square with these claims – so what is happening here?

Near Field Communications (NFC) is a set of standards for devices including smartphones and contactless cards that allow a radio frequency connection to be established quickly and simply by bringing two devices close together (typically within 10cm).  NFC standards cover communications protocols and data exchange formats, and are based on existing radio-frequency identification (RFID) standards.

An important aspect of these protocols is singulation. When several NFC devices are in the RF field of a reader, the reader needs a way to discriminate between them in order to establish a separate interaction with each of them. This is achieved through the singulation protocol, which is usually run when the reader starts a new communication session.  During this initial phase each device identifies itself to the reader, communicating an identifier that will then be used by the reader to address it individually.
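The sketch below illustrates the idea of singulation with a much-simplified polling model; real NFC anti-collision procedures (for example the bit-oriented collision resolution in ISO/IEC 14443) are considerably more involved, so treat this purely as a conceptual illustration.

```python
# Much-simplified illustration of singulation: the reader enumerates the
# identifiers of all cards in its field, then addresses each one individually.
# Real anti-collision protocols (e.g. ISO/IEC 14443) are far more involved.

class Card:
    def __init__(self, uid):
        self.uid = uid

    def respond(self, command):
        return f"card {self.uid} answered {command}"

def singulate(cards_in_field):
    """Return the identifiers announced by every card in the RF field."""
    return [card.uid for card in cards_in_field]

def session(cards_in_field):
    uids = singulate(cards_in_field)
    if len(uids) > 1:
        print(f"multiple cards detected: {uids}")  # the reader can tell them apart
    # The reader can now address each card individually by its identifier:
    for card in cards_in_field:
        print(card.respond("SELECT"))

session([Card("oyster-01"), Card("bank-77")])
```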

At the NFC device protocol level the ability to distinguish between cards is taken care of, so it looks like the problem lies at the application or system level.  The whole system relies on the same card being used on entry and on exit. The technical protection provided by the NFC protocols cannot protect the system if the application does not take account of the possibility of more than one card being detected at either end of the journey. In view of the number of passengers entering and leaving the Tube at peak times it is understandable that throughput may need to take priority over flexibility; however, getting to grips with details like this will be essential to realize the potential benefits of the Internet of Things.


IBM Bets on the Cloud

04.03.2014 by Mike Small

IBM has recently made a number of major announcements, and these are linked.  In December 2013 IBM announced the acquisition of Fiberlink, a privately held mobile management and security company.  Then on January 23rd it announced that Lenovo plans to acquire IBM’s x86 server business.  Lastly, on February 24th it made a series of announcements around its cloud offerings.  These announcements illustrate the changing way in which IT services are being delivered: the model is shifting from an in-house one focused on improving the efficiency of internal processes to a more flexible one supporting speed to market, better intelligence and connecting with customers.

In support of these objectives, IBM has been making significant investments over the recent past. These include $2B on the acquisition of SoftLayer and $1.2B on data centre expansion.  Through SoftLayer IBM now has a cloud offering that it sees as attractive to the new breed of “born on the cloud” businesses that would never previously have seen anything from IBM as relevant to their needs. Through the data centre investment, together with the SoftLayer infrastructure, IBM now has data centres in all major geographies and financial centres, with plans to expand in the Middle East and Africa in 2015.  IBM has also announced the open beta of a new Platform as a Service offering code named “BlueMix”.

The new platform is an interesting development, which highlights the shift in the kind of software that IBM sees needs to be developed.  This software, from an IBM perspective, is “apps” that must be attractive to end users, be supported on mobile devices and be developed iteratively in response to user feedback.  IBM says that the BlueMix platform is based on open standards (IBM is a long-time supporter of OpenStack) and that developers can choose any language runtime or bring their own. Critically, IBM claims that it is easy to connect to on-premises systems as well as other public and private clouds.

The acquisition of Fiberlink Communications provides IBM with the MaaS360 product, which will enable IBM to offer cloud-based or on-premises mobile device management (MDM), mobile content management (MCM) and mobile application management (MAM), including containerization.

So what does this mean from IBM’s point of view? It is clear that IBM sees cloud services providing more value to the IBM business than the manufacture of volume servers.  It establishes PaaS as the model for developing and deploying cloud based applications (note that IBM is a follower in this area, as other major cloud service providers have already shifted to this approach).  It emphasizes the importance of mobile device management as a critical aspect of cloud service security.  In effect, the weakest link in this user-centric model for IT services delivered through apps is the user and her mobile device.

Is the cloud the real future of IT services delivery?  In KuppingerCole’s opinion the cloud is a commoditized variant of managed services delivery like hosting and so it has a place. Standardizing services and enabling elasticity clearly has a value. The cloud provides an opportunity for SMEs to obtain higher quality and more secure IT services than they could provide themselves.  However this is a market with low margins.  For large organizations, the cloud provides value through standardized solutions with lower project cost and moving from Capex to Opex. On the other hand, the cloud currently adds complexity for large organizations in the areas of selection, management, integration, and auditing. Furthermore, in Europe there is grave concern over legal and compliance implications.  Most large organizations are using the cloud in some way or another, but frequently they refrain from using the cloud for business critical applications or regulated data.

What is the real long-term value of apps to organizations – is this a bubble?  In KuppingerCole’s view there is no doubt that apps provide a good way to deliver services to customers, partners and associates through mobile devices (albeit that the mobile versions commonly are not as good as the desktop ones). Apps also appear to be targeted more towards consuming than working on information, and more towards the occasional user than the power user doing his everyday work.  It is less clear that this approach provides real differentiation: if every organization is doing it then it becomes an additional cost rather than a benefit. And who knows today what the device, user interface and application paradigm will be in a few years from now, in this ever-changing world?

Iterative software development provides an excellent way of capturing user requirements, but does it lead to secure software? By analogy, would you build a skyscraper or a commercial airliner this way?  To create robust and secure systems you need to apply good engineering principles from the foundation upwards.

There is logic in IBM’s strategy; there is a growth market in the cloud, but most organizations will continue to use a mix of IT services delivery models.  KuppingerCole sees some large organizations using private cloud variants, for services such as SharePoint, from specialized providers who will take care of the legal and compliance concerns.  Here IBM has an advantage since it supports both public and private cloud as well as more traditional models including hosting and on-premises managed services.

For more information on best practices for managing cloud and mobile device security register now for the European Identity and Cloud Conference to be held in Munich during May 2014.


Cyber Security for the Citizen and SME

17.01.2014 by Mike Small

This week the UK government launched the Cyber Street programme to improve the cyber security of UK residents and SMEs.  This is complemented by a Cyber to the Citizen initiative from the BCS (the UK Chartered Institute for IT).

The background to this is the continuing concern that most cyber-crime is opportunistic and could be prevented if people consistently took simple measures.  For example, UK GCHQ estimates that 80 per cent of all cybercrime could be prevented by basic cyber hygiene measures, saving the UK economy billions of pounds annually. The campaign aims to increase the percentage of individuals and Small to Medium Enterprises (SMEs) consistently adopting ten or more cyber security behaviours (out of 17 for individuals and 14 for SMEs).

These behaviours include:

  • Install and keep up to date Anti-Virus Software
  • Use strong passwords
  • Only download from trusted websites or organizations
  • Beware of phishing emails
  • Review and protect business information

The study upon which this initiative is based found that only 10% of SMEs consistently applied these behaviours, and it is estimated that the Cyber Street programme will provide measurable benefits based on improving this by 5 percentage points.

This fits with KuppingerCole’s view that the main information security risks are still related to human weaknesses.  The root causes can be categorized as malice, misuse and mistake.  Malice includes activity by organized criminals outside of the organization as well as actions by insiders who may hold a grudge or who are being blackmailed.  Misuse is where people with legitimate access to information use that information in inappropriate ways, for example through curiosity.  Mistake is where people leak information through inattention or carelessness.

Sometimes it seems that the only people who value your information are those who want to steal it.  People would not treat money with the same disregard that they treat their IT systems and the data they hold.  Our view is that what is required is better information stewardship, which is based on the idea that a good steward takes care of property that is not his or her own. It is important that everyone understands that they are individually responsible for the security of the information they hold, the IT systems they run and the consequences of data being mishandled.

This is an interesting initiative on the part of the UK government, supported by the BCS.  We hope that it will be successful in developing a cyber-aware culture and reducing opportunistic cyber-crime.


Cloud Standards – Complex but not a Jungle according to ETSI!

09.12.2013 by Mike Small

In September 2013 the European Commission (EC) published a strategy to “create a single set of rules for cloud computing and increase EU GDP by €160 billion annually by 2020”. This strategy identified a number of key actions, one of these being “Cutting through the Jungle of Standards”.   Following a request from the European Commission, the European Telecommunications Standards Institute (ETSI) launched the Cloud Standards Coordination (CSC) initiative.  In November 2013 ETSI published its final report from the CSC initiative.    According to this report “the Cloud Standards landscape is complex but not chaotic and by no means a ‘jungle’”.

The final report is based on an analysis of over 100 cloud computing use cases.  It starts with a definition of the roles and parties involved in cloud computing.  The obvious roles of provider and customer are expanded to include a cloud service partner (who may act as a broker) as well as the government.  Unsurprisingly, the use cases involve three common activities: cloud service acquisition, cloud service use and cloud service termination.  These activities are broken down into more detail for a number of specific use cases.  The report identifies around 20 organizations involved in standardization activities related to cloud computing and around 150 documents.  However, at the activity level it finds that seldom are more than two standards relevant to any one activity.

The report concludes that emerging cloud specific standards are not seeing widespread adoption by cloud providers.  It suggests that cloud standards need to be flexible enough to allow each provider’s unique selling points to be accommodated.  The report identifies the following gaps:

Interoperability – this is a significant concern since vendor lock-in is a risk for cloud customers. The report concludes that while management protocols and interfaces, particularly for IaaS, are maturing, management specifications for PaaS and SaaS require more effort. There are many proprietary and open source solutions, but very few, if any, standards.

Security and Privacy – these are important areas of concern for cloud customers.  According to the report there are existing security and privacy standards which are helpful in this area but further development of common vocabularies and metrics is needed.  In addition there is a need for further standardization in the areas of accountability and cloud incident management (e.g., related to SLA infringements).

Service Level Agreements – the main requirement for standardization in relation to Service Level Agreements is the creation of an agreed set of terminology and definitions for Service Level Objectives, and an associated set of metrics for each of these; a simple illustration of such a metric is sketched below. There is some on-going work in this area, but this needs to be completed and, importantly, adopted by public cloud service providers.

Regulation, Legal and Governance aspects – the legal environment for cloud computing is highly challenging and a key barrier to adoption. Given the global nature of the cloud and its potential to transcend international borders, there is a need for an international framework and governance, underpinned by global standards.
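As a rough illustration of what an agreed metric for an availability Service Level Objective might look like, the sketch below computes monthly availability from recorded downtime and compares it against a target. The downtime figures, the 99.9% target and the definition of availability used here are assumptions for illustration only, not taken from the ETSI report.

```python
# Illustrative only: one possible metric for an availability SLO.
# The downtime figures and the 99.9% target are invented for the example.

def availability(minutes_in_period, downtime_minutes):
    """Availability as the fraction of the period the service was usable."""
    return 1 - downtime_minutes / minutes_in_period

minutes_in_month = 30 * 24 * 60             # 43,200 minutes
downtime = [12, 35, 8]                      # recorded outages in minutes

measured = availability(minutes_in_month, sum(downtime))
slo_target = 0.999                          # "three nines"

print(f"measured availability: {measured:.4%}")           # 99.8727%
print("SLO met" if measured >= slo_target else "SLO missed")
```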

The area of standards is important to cloud computing and standards will be the key to obtaining the benefits from this model for the delivery of IT services.  In view of this KuppingerCole have undertaken a detailed study of cloud standards and we have identified the standards that are important to the various processes involved in the selection, use and assurance of cloud services from the perspective of a cloud customer.  We have classified these standards in terms of the actions that a cloud customer needs to take.   You can get an overview of this subject area from our recorded webcast:  Negotiating the Cloud Standards and Advice Jungle.  For a more detailed view join the workshop on this subject at EIC in Munich during May 2014.


Smarter Risk

06.12.2013 by Mike Small

According to IBM, a consistent way to manage all types of risk is the key to success for financial services organizations.  To support this, IBM will be rolling out its Smarter Risk offering during Q1 2014.  Failure to properly manage risk has been alleged to be the cause of the financial crisis and, to force financial services organizations to better manage risk, regulators around the world are introducing tougher rules.

The underlying causes of the damaging financial crisis can be traced back to the management of risk.  Financial services organizations need to hold capital to protect against the various forms of risk.  The more capital they have to hold to cover existing risks, the less the opportunity to use that capital in other ways.  So fully understanding the risks faced is a key factor in organizational success.

According to Gillian Tett in her book Fool’s Gold, the roots of the financial crisis can be traced back to the Exxon Valdez disaster of 1989. To cover the billions of dollars needed for the clean-up, Exxon requested a credit line from its bankers J.P. Morgan and Barclays.  Covering this enormous credit line required the banks to set aside large amounts of capital.  In order to release this capital J.P. Morgan found a way to sell the credit risk to the European Bank for Reconstruction and Development.  This was one of the earliest credit default swaps and, while this particular one was perfectly understood by all parties, these types of derivatives evolved into things like synthetic collateralized debt obligations (CDOs), which were not properly understood and were to prove to be the undoing of the system.

IBM believes that, in order to better manage risk, financial services organizations need to manage all forms of risk in a consistent way, since they all contribute to the ultimate outcome for the business.  These include financial risk, operational risk, fraud and financial crime, as well as IT security.  The approach IBM advises is to build trust through better and more timely intelligence, and then to create value by taking a holistic view across all the different forms of risk.  The measurement of risk is a complex process involving many steps based on many sources of data.  Often a problem that is detected at a lower level is not properly understood at a higher level or is lost in the noise.  Incorrect priorities may be assigned to different kinds of risk, or the relative value of different kinds of intelligence may be misjudged.

So how does this relate to IT security?  Security is about ensuring the confidentiality, integrity and availability of information.  In the last week the UK bank RBS suffered a serious outage which led to its customers’ payment cards being declined over a period of several hours.  The reasons for this have not been published, but the reputational damage must be great since this is the latest in a series of externally visible IT problems suffered by the bank.  IBM provided an example of how it had used a prototype Predictive Outage Analytics tool on a banking application.  This banking application suffered 10 outages, each requiring over 40 minutes of recovery time, over a period of 4 weeks.  By analysing the system monitoring and performance data, the IBM team were able to show that these outages could have been predicted well in advance, and that the costs and reputational damage could have been avoided if appropriate action had been taken sooner.
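IBM has not published the details of how this prototype works, but the general idea of anticipating trouble from monitoring data can be illustrated with a simple rolling-baseline check. The metric values and threshold below are invented, and this sketch is in no way IBM’s Predictive Outage Analytics algorithm; it only shows the kind of signal such analytics look for.

```python
# Illustrative sketch of predicting trouble from performance metrics:
# flag a warning when the latest reading drifts well above a rolling baseline.
# The data and threshold are invented; this is not IBM's actual analytics.

def warn_on_drift(samples, window=5, factor=1.5):
    """Yield the indices where a sample exceeds factor * rolling average."""
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if samples[i] > factor * baseline:
            yield i, samples[i], baseline

# e.g. response times in milliseconds sampled every minute
response_ms = [110, 120, 115, 118, 122, 119, 121, 190, 260, 410]

for i, value, baseline in warn_on_drift(response_ms):
    print(f"minute {i}: {value}ms vs baseline {baseline:.0f}ms; investigate before it becomes an outage")
```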

So in conclusion this is an interesting initiative from IBM.  It is not the first time that IT companies have told their customers that they need to take a holistic view to manage risk and that IT risk is important to the business.  However, as a consequence of the financial crisis, the financial services industry is now subject to a tightening screw of regulation around the management of risk.  Under these circumstances, tools that can help these organizations to understand, explain and justify their treatment of risks are likely to be welcomed.  This holistic approach to the management of risk is not limited to financial organizations; many other kinds of organization could also benefit.  In particular, with the increasing dependence upon cloud computing and the impact of social and mobile on the business, IT risk has become a very real business issue and needs to be treated as such.


Salesforce Identity

15.10.2013 by Mike Small

Today Salesforce.com announced the general availability of Salesforce Identity – what does this mean for customers and the IAM market?

In some ways this is not a completely new product; Salesforce.com CRM customers have been using much of this functionality for some time.  However, what is announced today extends that functionality well beyond what previously existed.  Salesforce Identity provides a web single sign-on capability that can be white labelled and that can be based on authentication by Salesforce.com or a number of other trusted identity providers.  The Salesforce.com authentication methods include two-factor and risk-based approaches as well as user id and password.  It supports the main authentication standards like SAML, OAuth and OpenID as well as third parties like Facebook Connect.  It includes an extensible cloud directory and an optional “Salesforce Identity Connect” module. This latter component is built on ForgeRock’s Open Identity Stack and creates a bridge between existing on-premise directory solutions and Salesforce Identity.  Salesforce Identity also includes monitoring and reporting capabilities to allow the development of user activity and compliance reports.
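For developers, the standards support means that signing in via Salesforce Identity looks much like any other OAuth 2.0 web server flow. The sketch below shows the general shape of such a flow; the endpoint URLs follow Salesforce’s documented OAuth 2.0 service as I understand it, while the client id, secret and redirect URI are placeholders for values registered in a Connected App, so treat this as an illustrative outline rather than production code.

```python
# Outline of an OAuth 2.0 web server flow against Salesforce Identity.
# Endpoints follow Salesforce's documented OAuth 2.0 service as I understand it;
# client_id, client_secret and redirect_uri are placeholders for values you
# would register in a Connected App. Illustrative sketch, not production code.
from urllib.parse import urlencode
import requests

AUTH_URL = "https://login.salesforce.com/services/oauth2/authorize"
TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

CLIENT_ID = "YOUR_CONNECTED_APP_CLIENT_ID"            # placeholder
CLIENT_SECRET = "YOUR_CONNECTED_APP_SECRET"           # placeholder
REDIRECT_URI = "https://example.com/oauth/callback"   # placeholder

# Step 1: send the user's browser to the authorization endpoint.
login_link = AUTH_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
})
print("Redirect the user to:", login_link)

# Step 2: after the user authenticates, Salesforce redirects back with ?code=...
# which the application exchanges for an access token.
def exchange_code(code):
    response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
    })
    return response.json()  # contains the access token, identity URL, etc.
```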

So – is this just a cloud based web access management system?  Should this simply be viewed as being in direct competition with the range of IAM products in the market?  How does this stack up against the Microsoft cloud directory offering?  (See the Blog by Martin Kuppinger: Understanding Azure Active Directory)

As a company Salesforce.com has declared that its vision is to help organizations connect with their customers, partners, associates and devices.  To do this successfully requires an understanding of identities.  The organization needs to be able to uniquely identify a customer, partner or associate irrespective of how they connect, from whatever device and by whatever means.  So this announcement needs to be seen as part of this core vision.  When taken together with the other Salesforce.com developments, acquisitions and partnerships, it is much more than just another WAM product.  It is a key component of a platform for organizations to connect with their customers, partners and associates, and an essential component needed to support the other parts of the platform.

So, through this platform, Salesforce.com is seeking to change the way in which identities are managed by organizations: to shift the perspective away from one focussed on internal IT systems and users to an outward looking one focussed on customers and partners, whilst retaining internal control; in short, integrating enterprise identity with CRM.


Avoiding Data Breaches and Managing Big Data

09.10.2013 by Mike Small

Today information is the foundation upon which businesses are built and upon which organizations depend in order to prosper.  However, given its value, information is not treated with sufficient respect by everyone in the organization.  It sometimes seems that the only people who understand the value of information are those who are trying to steal it!

Big data makes this problem worse – in addition to the vast quantity of data from the Internet of Things and social media, so much unstructured information is now being created within organizations.  Who owns this data and who is responsible for its security?

I believe that what is needed is better information stewardship.  Information stewardship is a holistic concept, going beyond information security, which is based on the idea of taking care of property that is not your own.  Information stewardship involves everyone who creates information, not just the application owners or the IT service provider.

I will be presenting a session on this subject at RSA Europe on October 29th.

You can listen to a short podcast preview of this session.


The Future of the Cloud

03.10.2013 by Mike Small

As a UK member of ISACA as well as an industry analyst I was privileged to participate in a round table on the subject of the future of the cloud and the results of this were published in a supplement to the Guardian newspaper on September 27th.

Here is a summary of my thoughts on this subject:

The cloud is about efficiency and economies of scale.  The successful CSPs (Cloud Service Providers) will be those that can provide value for money to their customers.  For large enterprises the cloud will add complexity by becoming yet another platform to be accommodated.  However for SMEs the cloud can provide the IT services they need but could not run effectively themselves.  For start-ups the cloud provides a low risk opportunity to create new services.

The cloud is having an impact on conventional hosting services by creating what appears to be a cheaper and more easily accessible alternative.  The recent acquisition of SoftLayer by IBM shows that existing IT hosting organizations and service providers will need to acquire cloud skills and technologies if they are not to lose out. I believe we will see consolidation in the market.

The cloud also provides a challenge and an opportunity for IT solution vendors.  Large CSPs can provide a more stable platform for these solutions than the vendors themselves.  Basing their solution on a cloud platform allows the solution provider to focus on their specific product and skills rather than on the challenges of running an IT service.  It also provides a great opportunity for new solutions to emerge.

However, it is not straightforward for organizations to migrate existing applications to the cloud.  This is a limiting factor for the take-up of the cloud by enterprises.  I expect that the market in services to help organizations migrate existing applications to the cloud will grow.

The major concerns that most organizations have with the use of the cloud relate to security and compliance.  The large CSPs have taken these concerns on board and, in general, offer IT services and infrastructure that are as secure as, if not more secure than, those which most organizations can provide for themselves.  There has also been growth in CSPs providing services that are focussed on markets with specific compliance requirements.  However, the standard contracts offered by many CSPs still provide limited liability in the event of failure of the service or of security breaches.

For organizations there are three keys to successful exploitation of the cloud:

  1. Understand, at the board level, what your business objectives are for exploiting the cloud.
  2. Set constraints that limit the risks relating to the security of your data and compliance with laws and regulations to a level you are comfortable with.
  3. Trust the CSP, but verify that trust through independent certification and monitoring of the service.

Oracle and Salesforce.com announce partnership

28.06.2013 by Mike Small

Oracle and Salesforce.com CEOs, Larry Ellison and Marc Benioff, hosted a briefing call on June 27th to announce a partnership between these two highly successful companies.  What does this partnership cover and what does it mean?

Salesforce.com is built on Oracle technology and so Salesforce is very dependent upon Oracle.  Marc Benioff confirmed that Salesforce has confidence in the latest releases from Oracle, including the Oracle 12c database, Oracle Linux and Oracle Exadata.  Larry Ellison announced that this partnership will ensure that there will be out of the box integration of Salesforce.com CRM with Oracle HCM and Oracle financial applications.  However, there will be no cross selling of each other’s products: each company’s sales force will continue to sell only its own products.

So what are the benefits for customers?  It is very difficult to quantify the benefits, but qualitatively the integrations will be more reliable, more secure and have better performance than ad hoc ones.   At the moment organizations have to build their own integrations, which are costly to create and costly to maintain.  This partnership should remove these costs and hence is good for the large number of organizations that are using, or will decide to use, Salesforce.com CRM together with the other Oracle applications.

It looks like Oracle has conceded that organizations which have adopted Salesforce.com CRM will not be persuaded to migrate to Oracle CRM.  At the same time Oracle is assured of a significant continuing revenue stream for its products from Salesforce.com.

Salesforce.com has stated that its aim is to help organizations get closer to their customers, partners and associates, and it is transforming itself from a CRM provider into a platform provider.  So it would appear that it is not interested in competing with the other Oracle applications.

This is where it becomes interesting: on the call it was made clear that the partnership does not cover the platform.  Both CEOs described the importance of a platform that will support the explosion of data from networked devices.  However, both companies have their own evolving platform to provide a solution for that problem.  So here the competition continues.


© 2014 Mike Small, KuppingerCole