A Haven of Trust in the Cloud?

11.11.2014 by Mike Small

In September a survey published in Dynamic CISO showed that “72% of Businesses Don’t Trust Cloud Vendors to Obey Data Protection Laws and Regulations”. Given this lack of trust by their customers, what can cloud service vendors do?

When an organization stores data on its own computers, it believes that it can control who can access that data. This belief may be misplaced, given the number of reports of data breaches from on-premises systems, but most organizations trust themselves more than they trust others. When the organization stores data in the cloud, it has to trust the cloud provider, the cloud provider’s operations staff and the legal authorities with jurisdiction over the cloud provider’s computers. This creates serious concerns about moving applications and data to the cloud, especially in Europe and in particular in countries like Germany that have very strong data protection laws.

One approach is to build your own cloud, where you have physical control over the technology but can still exploit some of the flexibility that a cloud service provides. This is the approach being promoted by Microsoft. In October Microsoft, in conjunction with Dell, announced their “Cloud Platform System”. This is effectively a way for an organization to deploy Dell servers running the Microsoft Azure software stack on premises. Using this platform, an organization can build and deploy on-premises applications that are Azure cloud ready. At the same time it can see for itself what goes on “under the hood”. Then, when the organization has built enough trust, or when it needs more capacity, it can easily extend the existing workload into the cloud. This approach is not unique to Microsoft – other cloud vendors also offer products that can be deployed on premises where there are specific needs.

In the longer term, Microsoft researchers are working to create what is being described as a “Haven in the Cloud”. This was described in a paper at the 11th USENIX Symposium on Operating Systems Design and Implementation. In this paper, Baumann and his colleagues offer a concept they call “shielded execution”, which protects the confidentiality and integrity of a program, and its associated data, from the platform on which it runs – the cloud operator’s operating system, administrative software, and firmware. They claim to have shown for the first time that it is possible to store data and perform computation in the cloud with trust equivalent to local computing.

The Haven prototype uses the hardware protection proposed in Intel’s Software Guard Extensions (SGX) – a set of CPU instructions that applications can use to isolate code and data securely, enabling protected memory and execution. It addresses the challenges of executing unmodified legacy binaries and protecting them from a malicious host. It is based on “Drawbridge”, another piece of Microsoft research, which provides a new kind of virtual-machine container.

The question of trust in cloud services remains an important inhibitor to their adoption. It is good to see that vendors are taking these concerns seriously and working to provide solutions. Technology is an important component of the solution, but it is not, in itself, sufficient. In general, computers do not breach data by themselves; human interactions play an important part. The need for cloud services to support better information stewardship, and for cloud service providers to create an information stewardship culture, is also critical to creating trust in their services. From the perspective of the cloud service customer, my advice is always: trust, but verify.


CESG Draft Cloud Security Principles and Guidelines

27.09.2014 by Mike Small

UK CESG, the definitive voice on the technical aspects of Information Security in UK Government, has published draft versions of guidance for “public sector organizations who are considering using cloud services for handling OFFICIAL information”. (Note that the guidelines are still at a draft stage (BETA) and CESG is requesting comments.) There are already many standards that exist or are being developed around the security of cloud services (see: Executive View: Cloud Standards Cross Reference – 71124), so why is this interesting?

Firstly, there is an implied prerequisite that the information being held or processed has been classified as OFFICIAL. KuppingerCole’s advice is very clear: the first step to cloud security is to understand the risk by considering the business impact of loss or compromise of data. CESG publishes a clear definition of OFFICIAL, which is the lowest level of classification and covers “ALL routine public sector business, operations and services”. So, to translate this into business terms, the guidelines are meant for cloud services handling day-to-day operational services and data.

Secondly, the guidelines are simple, clear and concise, and simple is more likely to be successful than complex. There are 14 principles that apply to any organization using cloud services. The principles are summarized as follows:

  1. Protect data in transit
  2. Protect data stored against tampering, loss, damage or seizure. This includes consideration of legal jurisdiction as well as sanitization of deleted data.
  3. A cloud consumer’s service and data should be protected against the actions of others.
  4. The CSP (service provider) should have and implement a security governance framework.
  5. The CSP should have processes and procedures to ensure the operational security of the service.
  6. CSP staff should be security screened and trained in the security aspects of their role.
  7. Services should be designed and developed in a way that identifies and mitigates security threats.
  8. The service supply chain should support the principles.
  9. Service consumers should be provided with secure management tools for the service.
  10. Access to the service should be limited to authenticated and authorized individuals.
  11. External interfaces should be protected.
  12. CSP administration processes should be designed to mitigate risk of privilege abuse.
  13. Consumers of the service should be provided with the audit records they need to monitor access to their service and data.
  14. Consumers have responsibilities to ensure the security of the service and their data.

Thirdly, there is detailed implementation advice for each of these principles. As well as providing technical details for each principle, the guidance describes six ways in which the customer can obtain assurance. These assurance approaches can be used in combination to increase confidence. The approaches are:

  1. Service provider assertions – this relies upon the honesty, accuracy and completeness of the information from the service provider.
  2. Contractual commitment by the service provider.
  3. Review by an independent third party to confirm the service provider’s assertions.
  4. Independent testing to demonstrate that controls are correctly implemented and objectives are met in practice. Ideally this and 3 above should be carried out to a recognised standard. (Note that there are specific UK government standards here but for most commercial organizations these standards would include ISO/IEC 27001, SOC attestations to AICPA SSAE No. 16/ ISAE No. 3402 and the emerging CSA Open Certification Framework)
  5. Assurance in the service design – A qualified security architect is involved in the design or review of the service architecture.
  6. Independent assurance in the components of a service (such as the products, services, and individuals which a service uses).

These guidelines provide a useful addition to the advice that is available around the security of cloud services. They provide a set of simple principles that are easy to understand. These principles are backed up with detailed technical advice on their implementation and assurance. Finally, they take a risk-based approach in which the consumer needs to classify the data and services in terms of their business impact.

KuppingerCole has helped major European organizations to successfully understand and manage the real risks associated with cloud computing. We offer research and services to help cloud service providers, cloud security tool vendors, and end user organizations. To learn more about how we can help your organization, just contact sales@kuppingercole.com.


Microsoft OneDrive file sync problems

01.09.2014 by Mike Small

A number of users of Microsoft’s OneDrive cloud storage system have reported problems on the Microsoft community forum relating to synchronizing files between devices. So far, I have not seen an official response from Microsoft. This can be very disconcerting, so here are some suggestions for affected users. These worked for me, but in the absence of a formal response from Microsoft I can offer no cast-iron guarantees.

What is the problem? It appears that files created on one device are synced to another device in a corrupt state. This only seems to affect Microsoft Office files (Word, Excel, PowerPoint etc.), which have been created or updated since around August 27th. It does not appear to affect other types of files such as .pdf, .jpg and .zip, for example. When the user tries to access the corrupt file, they get a message of the form “We’re sorry, we can’t open the <file> because we found a problem with its contents”.

This problem does not affect every device, but it can be very disconcerting when it happens to you! The good news is that the data appears to be correct on the OneDrive cloud and – if you are careful – you can retrieve it.

Have I got the problem? Here is a simple test that will allow you to see if you have the problem on your device (a small scripted version of the check is sketched after the list):

  1. Create a simple Microsoft Office file and save it in the local file store of the device. Do not save it on the OneDrive system.
  2. Log onto OneDrive https://onedrive.live.com/ using a browser and upload the file to a folder on your OneDrive.
  3. Check the synced copy of the file downloaded by the OneDrive App onto your device. If the synced file is corrupted, you have the problem!
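For readers comfortable with a little scripting, the comparison in step 3 can be made objective by hashing the original file and the synced copy rather than opening them in Office. The sketch below is purely illustrative: the file paths are hypothetical and need to be replaced with the location of your original file and your own OneDrive folder.

# A minimal sketch of step 3: compare the original file with the copy the
# OneDrive app synced back down. The paths are hypothetical examples.
import hashlib
from pathlib import Path

ORIGINAL = Path(r"C:\Temp\sync-test.docx")                   # file from step 1 (saved locally only)
SYNCED = Path(r"C:\Users\me\OneDrive\Test\sync-test.docx")   # copy downloaded by the OneDrive app

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256(ORIGINAL) == sha256(SYNCED):
    print("Files match - sync looks OK on this device.")
else:
    print("Files differ - the synced copy appears to be corrupted.")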

What can I do? Do not panic – the data seems to be OK on the OneDrive cloud. Here is how I was able to get the data back onto my device:

  1. Log onto OneDrive https://onedrive.live.com/ using a browser and download the file to your device, replacing the corrupt copy.
  2. Do NOT delete the corrupt file on your device – this will send the corrupt version to the recycle bin. It will also cause the deletion of the good version on other devices.
  3. It is always a good idea to run a complete malware scan on your devices. If you have not done so recently, now is a very good time. I did that but no threats were detected.
  4. Several people, including me, have followed Microsoft’s published advice on how to troubleshoot sync problems – but this did not work for them or for me.
  5. I did a complete factory reset on my Surface RT – this did not help. Many other people have tried this also to no avail.

Is there a work around? I have not yet seen a formal response from Microsoft, so here are some things that all worked for me:

  1. Accept the problem and whenever you find a corrupt file perform a manual download as described above.
  2. Protect your Office files using a password – this causes the files to be encrypted, and it appears that password-protected files are not corrupted. In any case KuppingerCole recommends that information held in cloud storage should be encrypted.
  3. Use WinZip to zip files that are being changed. It seems that .zip files are not being corrupted (a minimal scripted alternative is sketched after this list).
  4. Use some other cloud storage system or a USB drive to share these files.
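To illustrate workaround 3: any standard zip tool will do, not just WinZip. The following minimal sketch uses Python’s built-in zipfile module; the paths are hypothetical and should be adjusted to your own working file and OneDrive folder.

# Illustrative only: zip a working file before it lands in the OneDrive folder,
# since .zip files did not appear to be affected. Paths are hypothetical.
import zipfile
from pathlib import Path

source = Path(r"C:\Work\report.docx")                    # the file you are editing
target = Path(r"C:\Users\me\OneDrive\Work\report.zip")   # zipped copy to be synced

with zipfile.ZipFile(target, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(source, arcname=source.name)
print(f"Wrote {target}")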

This example illustrates some of the downsides of using a cloud service. Cloud services are very convenient when they work, but when they don’t work you may have very little control over the process to fix the problem. You are completely in the hands of the CSP (Cloud Service Provider). If you are using a service for business, access to the data you are entrusting to the CSP may be critical to your business operations. One of the contributors to the Microsoft support community described how, because he was unable to work, he was getting no pay; this is a graphic illustration of the problem.

KuppingerCole can offer research, advice and services relating to securely using the cloud. In London on October 7th, KuppingerCole will hold a Leadership Seminar on Risk and Reward from the Cloud and the Internet of Things. Attend this seminar to find out how to manage these kinds of problems for your organization.

Update September 3rd, 2014

An update – the program manager at OneDrive (Arcadiy K) responded to the Microsoft Community and apologized.

“We’ve found the cause of the issue and we believe we have made sure that no new files will be affected. Any files you sync moving forward should work fine and you should no longer encounter the corruption issues described in this thread. Please let us know if you find otherwise.”

I have tested this on my Windows RT 8.1, and on this device I can confirm that it is fixed. Interestingly, there have been no Microsoft updates (or any other changes except a virus signature update) to my device.

Microsoft has just announced the rollup update for August 2014. Under “Fixed issues” this includes: “August 2014 OneDrive reliability update for Windows RT 8.1 and Windows 8.1”.

Some folk are still having problems trying to clean up the mess from the previous errors. I would advise reading the thread on the support forum for suggestions on how to recover from these.

 


Cloud Provider Assurance

05.08.2014 by Mike Small

Using the cloud involves an element of trust between the consumer and the provider of a cloud service; however, it is vital to verify that this trust is well founded. Assurance is the process that provides this verification. This article summarizes the steps a cloud customer needs to take to assure that a cloud service provides what is needed and what was agreed.

The first step towards assuring a cloud service is to understand the business requirements for it. The needs for cost, compliance and security follow directly from these requirements. There is no absolute assurance level for a cloud service – it needs to be just as secure, compliant and cost-effective as dictated by the business needs – no more and no less.

The needs for security and compliance depend upon the kind of data and applications being moved into the cloud. It is important to classify this data and any applications in terms of their sensitivity and regulatory requirements. This helps the procurement process by setting many of the major parameters for the cloud service as well as the needs for monitoring and assurance. For more detail, see Advisory Note: From Data Leakage Prevention (DLP) to Information Stewardship – 70587.

Use a standard process for selecting cloud services that is fast, simple, reliable, standardized, risk-oriented and comprehensive. Without this, there will be a temptation for lines of business to acquire cloud services directly without fully considering the needs for security, compliance and assurance. For more information on this aspect see Advisory Note: Selecting your cloud provider – 70742.

Take care to manage the contract with the cloud service provider. An article on negotiating cloud contracts from Queen Mary University of London provides a comprehensive list of the concerns of organizations adopting the cloud and a detailed analysis of cloud contract terms. According to this article, many of the contracts studied provided very limited liability, inappropriate SLAs (Service Level Agreements), and a risk of contractual lock in. See also – Advisory Note: Avoiding Lock-in and Availability Risks in the Cloud – 70171.

Look for compliance with standards; a cloud service may have significant proprietary content, which can also make the cost of changing provider high. Executive View: Cloud Standards Cross Reference – 71124 provides advice on this.

You can outsource the processing, but you can’t outsource responsibility – make sure that you understand how responsibilities are divided between your organization and the CSP. For example, under EU data protection laws, the cloud provider is usually the “data processor” and the cloud customer is the “data controller”. Remember that the “data controller” can be held responsible for breaches of privacy by a “data processor”.

Independent certification is the best way to verify the claims made by a CSP. Certification of the service to ISO/IEC 27001 is a mandatory requirement. However, it is important to understand properly whether what is certified is relevant to your needs. For a complete description of how to assure cloud services in your organization see Advisory Note: Cloud Provider Assurance – 70586.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


EU Guidelines for Service Level Agreements for Cloud Computing

03.07.2014 by Mike Small

In a press release on June 26th, the European Commission announced the publication of new guidelines to “help EU businesses use the Cloud”. These guidelines have been developed by a Cloud Select Industry Group as part of the Commission’s European Cloud Strategy to increase trust in these services. The guidelines cover SLAs (Service Level Agreements) for cloud services. In KuppingerCole’s opinion these guidelines are a good start, but they are not a complete answer to the concerns of individuals and businesses choosing to use cloud services.

Cloud services are important because they provide a way for individuals and businesses to access IT applications and infrastructure in a flexible way, without the need for large up-front capital investment. This makes it possible for new businesses to minimize the risk of testing new products and for existing businesses to reduce the cost of running core IT services. It also allows individuals to access a range of IT services for free or at minimal cost.

The cost model for cloud services is based on two pillars: the service is standardized and offered to the customer on a take-it-or-leave-it basis, and the cloud service provider can exploit the cost savings that accrue from the massive scale of their service. In the case of services offered to individuals there is a third pillar: the cloud service provider can exploit or sell information gathered about the individual users in exchange for providing the service.

Since the service offered is not usually open to negotiation, it is important that its definition is clear, to enable the potential customer to perform a real comparison between services offered by different providers. This definition should also be transparent about how the service provider handles and uses data stored in, or collected by, the service. This is especially important because many kinds of data are subject to laws and regulations, and the customer needs to be able to verify that the data for which they are responsible is being handled appropriately. In addition, the individual user of a service needs to understand how data collected about them will be used.

These new guidelines specify what a cloud SLA should cover, but not what the service level should be. They provide a detailed vocabulary with definitions of the various terms used in SLAs, and they suggest a set of relevant SLOs (Service Level Objectives) for each of the following major areas of a cloud service (a simple checklist sketch follows the list):

  • The performance of the service including: availability, response, capacity, capability, support and reversibility. This latter aspect covers the processes involved when the service is terminated. This is important since one of the key concerns is the return of a customer’s data when the service ends together with guarantees about the erasure of that data.
  • The security of the service including: its reliability, authentication and authorization, cryptography, incident management, logging and monitoring, auditing and verification, vulnerability management and service governance.
  • Data management including: data classification, data mirroring, backup and restore, data lifecycle and data portability. The data lifecycle includes an SLO for “data deletion type”: this should specify the quality of the data deletion, ranging from weak to strong sanitization (such as specified in NIST 800-88) where the data cannot easily be recovered.
  • Personal data protection: this focuses on the cases where the cloud service provider acts as a “data processor” for the customer, who is the “data controller”. It covers codes of conduct and certification mechanisms; data minimisation; use, retention and disclosure; openness, transparency and notice; accountability; geographic location; and intervenability.
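To make this concrete, a customer could turn these areas into a simple checklist for comparing providers. The sketch below is purely illustrative: the SLO names and example values are hypothetical and are not taken from the guidelines or from any real SLA.

# A minimal, illustrative sketch of turning the guideline areas into a
# comparison checklist. SLO names and values are hypothetical.
candidate_sla = {
    "performance": {
        "availability_percent": 99.9,
        "reversibility": "data returned within 30 days of termination",
    },
    "security": {
        "incident_notification_hours": 24,
        "logging_and_monitoring": "customer-accessible audit logs",
    },
    "data_management": {
        "data_deletion_type": "strong sanitization (e.g. NIST 800-88)",
        "data_portability": "export in documented open formats",
    },
    "personal_data_protection": {
        "geographic_location": "EU only",
        "law_enforcement_disclosures_reported": True,
    },
}

# The customer's own minimum requirements, expressed as simple checks.
required = {
    ("performance", "availability_percent"): lambda v: v >= 99.9,
    ("data_management", "data_deletion_type"): lambda v: "strong" in v,
    ("personal_data_protection", "geographic_location"): lambda v: v == "EU only",
}

for (area, slo), check in required.items():
    value = candidate_sla[area][slo]
    status = "OK" if check(value) else "SHORTFALL"
    print(f"{area}/{slo}: {value} -> {status}")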

These guidelines are a good start, but they are not a complete answer to the concerns of individuals and businesses choosing to use cloud services. They provide a common set of areas that a cloud SLA should cover and a common set of terms that can be used. However, the definition of the objectives in a standard, measurable way still falls short; it allows too much “wriggle room” for the cloud provider. A worthwhile document that provides more detailed advice on what to measure in cloud contracts, and how to measure it, is ENISA’s Procure Secure.

It is good that the guidelines distinguish between the legal contractual aspects and the technical service definition. However, the SLOs cover areas of data privacy where there is an essential overlap, because of the legal obligations upon the cloud customer where they are using the cloud service to process data subject to regulations or laws. Section 6.4 covers the contentious area of disclosure of personal data to law enforcement authorities and suggests the objectives should include the number of disclosures made over a period of time as well as the number notified. This will not be sufficient to moderate the significant concerns of European organizations using non-EU-based cloud service providers.

KuppingerCole has helped major European organizations to successfully understand and manage the real risks associated with cloud computing. We offer research and services to help cloud service providers, cloud security tool vendors, and end user organizations. To learn more about how we can help your organization, just contact sales@kuppingercole.com.


AWS: Great Security but can you Trust a US Owned Cloud Service?

30.05.2014 by Mike Small

Cloud computing provides an unparalleled opportunity for new businesses to emerge and for existing businesses to reduce costs and improve the services to their customers. However, the revelations of Snowden and the continuing disclosure of state-sponsored interception and hacking undermine confidence in cloud service providers. In this environment CSPs need to go the extra mile to prove that their services are trustworthy.

In general there are two kinds of customers adopting cloud computing. The first kind is the so-called “born on the cloud” customers, who are starting new businesses that depend upon IT but without the need to make large capital investments in IT. The second is the organizations that are already using IT in house and are creating new IT applications in the cloud and moving existing ones to the cloud.

These two kinds of customers have different sets of risks to manage. For the born-on-the-cloud customers, the biggest risk is whether or not their business will take off; conventional IT security risks are important but not crucial (although this may prove to be a mistake in the long run). However, organizations moving to the cloud may have already invested heavily in IT – to ensure information security, for compliance, or to protect intellectual property – and, for these organizations, cloud security and governance are critical concerns. From the announcements it appears that AWS is now working to attract enterprise customers that are moving to the cloud.

At their event in London on April 28th, 2014, AWS produced an impressive list of customers that included start-ups, enterprises and public sector organizations. What was new was the list of enterprises that were moving their IT entirely to the cloud; these included an Australian bank and a German hotel chain. To attract and keep these kinds of customers, AWS needs to demonstrate the functionality, security and governance of their offering as well as a competitive price.

AWS claims a high level of IT security and governance for their cloud services, and these claims are backed by independent certification. AWS security principles and processes are described in a white paper. In June 2013, KuppingerCole published an Executive View on this: Amazon Web Services – Security and Assurance – 70779. There are many existing features that AWS offers which are of particular interest to enterprises; these include:

  • The ability to use a dedicated network connection from the enterprise to AWS using standard 802.1q VLANs.
  • A Virtual Private Cloud – a logically isolated section of the AWS Cloud for the enterprise’s AWS resources.
  • Control of access to the enterprise’s AWS resources based on the enterprise Active Directory using Active Directory Federation Services (ADFS).
  • Data encryption using Amazon Cloud HSM – which allows the enterprise to retain control over the encryption keys.
  • Control of the geography in which the enterprise data is held and processed.

Since then, AWS has added AWS CloudTrail. This is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.
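As a simple illustration of how this log delivery can be consumed, the sketch below reads a few delivered CloudTrail log files from S3 and prints who called what, when and from where. It is only a sketch: the bucket and prefix are hypothetical, and it assumes the current AWS SDK for Python (boto3) and suitable credentials rather than the tooling available at the time of writing.

# A minimal sketch of consuming the log files CloudTrail delivers to S3.
# Bucket and prefix are hypothetical examples.
import gzip
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-cloudtrail-bucket"                        # hypothetical
PREFIX = "AWSLogs/123456789012/CloudTrail/eu-west-1/"  # hypothetical

objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])
for obj in objects[:10]:                               # first few log files only
    body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
    records = json.loads(gzip.decompress(body))["Records"]
    for event in records:
        who = event.get("userIdentity", {}).get("arn", "unknown")
        print(event["eventTime"], event["eventName"],
              event.get("sourceIPAddress", "-"), who)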

An organization adopting the cloud needs to balance the risks against the rewards. Information security and compliance are the main risks that are holding enterprises back from cloud adoption. AWS claims a high level of security, and these claims are backed by independent audits – however, there is still the problem of trust. The revelations by Snowden of the extent to which the NSA was intercepting communications have made many organizations wary of US-based cloud services. The US government’s unwillingness to permit organizations to publish sufficient data relating to Foreign Intelligence Surveillance Act (FISA) orders added to these concerns. (However, in January 2014 the Obama administration reached a deal allowing the disclosure of more information on the customer data companies are compelled to share with the US government, albeit with some delay.)

The extent to which nation states are eavesdropping on or hacking into commercial enterprises (US justice department charges Chinese with hacking) has added to this concern.

While this may seem unfair on AWS, many European enterprises are choosing not to put business-critical applications or confidential data into US-managed cloud services. Addressing these concerns will be difficult. AWS CTO Werner Vogels was recently featured in an article in the Guardian newspaper. In this article he writes: “Another core value is putting data protection, ownership, and control, in the hands of cloud users. It is essential that customers own and control their data at all times.” KuppingerCole agrees with this sentiment, but cloud service providers will need to go the extra mile to prove that their services, their employees and their infrastructure cannot be suborned by national interests or national agencies.

 


IBM’s Software Defined Environment

08.04.2014 by Mike Small

In IBM’s view, the kinds of IT applications that organizations are creating are changing from internal-facing systems to external-facing systems. IBM calls these kinds of systems “systems of record” and “systems of engagement” respectively. The systems of record represent the traditional applications that ensure that the internal aspects of the business run smoothly and the organization is financially well governed. The systems of engagement exploit the new wave of technology that is being used by customers and partners, which takes the form of social and mobile computing. In IBM’s opinion a new approach to IT is needed to cater for this change, which IBM calls SDE (Software Defined Environments).

According to IBM, these systems of engagement are being developed to enable organizations to get closer to their customers and partners, to better understand their needs and to better respond to their issues and concerns. They are therefore vital to the future of the business.

However, the way these systems of engagement are developed, deployed and exploited is radically different to that for systems of record. The development methodology is incremental and highly responsive to user feedback. Deployment requires IT infrastructure that can quickly and flexibly respond to use by people outside the organization. Exploitation of these applications requires the use of emerging technologies like Big Data analytics, which can place unpredictable demands on the IT infrastructure.

In response to these demands IBM has a number of approaches; for example, in February I wrote about how IBM has been investing billions of dollars in the cloud. IBM also offers something it calls SDE (Software Defined Environment). IBM’s SDE custom-builds business services by leveraging the infrastructure according to workload types, business rules and resource availability. Once these business rules are in place, resources are orchestrated by patterns – best practices that govern how to build, deploy, scale and optimize the services that these workloads deliver.

IBM is not alone in this approach; others, notably VMware, are heading in the same direction.

In the IBM approach, abstracted and virtualized IT infrastructure resources are managed by software via API invocations. Applications automatically define infrastructure requirements, configuration and service level expectations. The developer, the people deploying the service and the IT service provider are all taken into account by the SDE.

This is achieved by the IBM SDE being built on software and standards from the OpenStack Foundation, of which IBM is a member. IBM has added specific components and functionality to OpenStack to fully exploit IBM hardware and software; these include drivers for IBM storage devices, PowerVM, KVM and IBM network devices. IBM has also included some IBM “added value” functionality, which includes management API additions, scheduler enhancements, management console GUI additions, and a simplified install. Since the IBM SmartCloud offerings are also based on OpenStack, this makes cloud bursting into the IBM SmartCloud (as well as any other cloud based on OpenStack) easier, except where there is a dependency on the added-value functionality.

One of the interesting areas is the support provided by the Platform Resource Scheduler for the placement of workloads. The supported policies make it possible to place workloads in a wide variety of ways, including: packing workloads onto the fewest physical servers or spreading them across several; load balancing and memory balancing; and keeping workloads physically close together or physically separate.
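The difference between these policies is easy to illustrate. The toy sketch below is not IBM’s Platform Resource Scheduler API – it is a purely illustrative placement function showing how a “pack” policy concentrates workloads on few hosts while a “spread” policy distributes them.

# Purely illustrative -- not IBM's Platform Resource Scheduler API.
def place(workloads, hosts, policy="pack"):
    """Assign each (name, cpu) workload to a host dict with 'name' and 'free_cpu'."""
    placement = {}
    for name, cpu in workloads:
        candidates = [h for h in hosts if h["free_cpu"] >= cpu]
        if not candidates:
            raise RuntimeError(f"no capacity for {name}")
        if policy == "pack":      # fill the host with the least remaining capacity
            target = min(candidates, key=lambda h: h["free_cpu"])
        else:                     # "spread": pick the emptiest host
            target = max(candidates, key=lambda h: h["free_cpu"])
        target["free_cpu"] -= cpu
        placement[name] = target["name"]
    return placement

for policy in ("pack", "spread"):
    hosts = [{"name": "host-a", "free_cpu": 16}, {"name": "host-b", "free_cpu": 16}]
    print(policy, place([("web", 4), ("db", 4), ("batch", 4)], hosts, policy))
# "pack" puts all three workloads on one host; "spread" alternates between hosts.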

IBM sees organizations moving to SDEs incrementally rather than in a big-bang approach. The stages it sees are virtualization, elastic data scaling, elastic transaction scaling, policy-based optimization and finally application-aware infrastructure.

In KuppingerCole’s opinion, SDCI (Software Defined Computing Infrastructure) is the next big thing. Martin Kuppinger wrote about this at the end of 2013. IBM’s SDE fits into this model and has the potential to allow end user organizations to make better use of their existing IT infrastructure and to provide greater flexibility to meet changing business needs. It is good that IBM’s SDE is based on standards; however, there is still a risk of lock-in, since the standards in this area are incomplete and still emerging. My colleague Rob Newby has also written about the changes that are needed for organizations to successfully adopt SDCI. In addition, it will require a significant measure of technical expertise to implement successfully in full.

For more information on this subject there are sessions on Software Defined Infrastructure and a Workshop on Negotiating Cloud Standards Jungle at EIC May 12th to 16th in Munich.


Card Clash on the London Underground

07.03.2014 by Mike Small

Recently there have been posters in London Underground stations warning users of Oyster Cards – the Transport for London (TfL) NFC-enabled electronic travel wallet – that there is a risk of “card clash”. These posters warn users that they need to keep other contactless NFC payment cards separate from their Oyster Card when they “touch in” on a bus, to avoid the risk that the wrong card is charged. TfL will be rolling out the ability to use NFC-enabled payment cards on the Tube (London Underground), Overground and DLR later in 2014, and this could lead to further problems. Charges on the London Underground are based on the journey made, and the system depends upon the same card “touched in” on a reader at the origin of the journey being “touched out” at the destination. If a different card is used at each end of the journey, both cards are charged the maximum fare.

NFC technology is an important enabling technology for the Internet of Things (IoT) and the vision for the IoT makes bold promises of benefits for individuals and businesses.  These benefits include making life easier for the individual while allowing businesses to be more efficient.  Being charged twice for the same journey doesn’t seem to square with these claims – so what is happening here?

Near Field Communications (NFC) is a set of standards for devices including smartphones and contactless cards that allow a radio frequency connection to be established quickly and simply by bringing two devices close together (within 10cm to 20cm).  NFC standards cover communications protocols and data exchange formats, and are based on existing radio-frequency identification (RFID) standards.

An important aspect of these protocols is singulation. When several NFC devices are in the RF field of a reader, the reader needs a way to discriminate between them in order to establish single interactions with one or each of them. This is achieved through the singulation protocol, which is usually run at the time the reader starts a new communication session. During this initial phase each device identifies itself to the reader, communicating an identifier that will then be used by the reader to contact it individually.

At the NFC device protocol level the ability to distinguish between cards is taken care of, so it looks like the problem lies at the application or system level. The whole system relies on the same card being used on entry and on exit. The technical protection provided by the NFC protocols cannot protect the system if the application does not take account of the possibility of more than one card being detected at either end. In view of the number of passengers entering and leaving the Tube at peak times, it is understandable that throughput may need to take priority over flexibility; however, getting to grips with details like this will be essential to realize the potential benefits of the Internet of Things.
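The failure mode is easy to illustrate. The sketch below is not TfL’s fare engine – it is a purely illustrative simulation, with hypothetical fares, of what happens when the application layer keeps only one card identifier per gate and a different card is selected at each end of the journey.

# Purely illustrative -- not TfL's actual fare engine. Two NFC cards are
# presented together; each gate singles out one card, but not necessarily
# the same one at both ends, so the back office sees two incomplete journeys.
MAX_FARE = 8.50      # hypothetical values
JOURNEY_FARE = 2.80

def gate_read(cards_in_field):
    """Singulation succeeds, but the application keeps only one card ID."""
    return sorted(cards_in_field)[0]   # stand-in for "whichever card is selected"

def charge(touch_in, touch_out):
    """Return the charges per card for one trip."""
    if touch_in == touch_out:
        return {touch_in: JOURNEY_FARE}                    # complete journey
    return {touch_in: MAX_FARE, touch_out: MAX_FARE}       # two incomplete journeys

wallet = {"oyster-123", "bankcard-456"}
entry_card = gate_read(wallet)           # the bank card happens to be selected here
exit_card = gate_read({"oyster-123"})    # only the Oyster card is read on exit
print(charge(entry_card, exit_card))     # both cards end up charged the maximum fare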


IBM Bets on the Cloud

04.03.2014 by Mike Small

IBM has recently made a number of major announcements, and these are linked. In December 2013 IBM announced the acquisition of Fiberlink, a privately held mobile management and security company. Then on January 23rd it announced that Lenovo plans to acquire IBM’s x86 server business. Lastly, on February 24th it made a series of announcements around its cloud offerings. These announcements illustrate the changing way in which IT services are being delivered. This is shifting from an in-house model focused on improving the efficiency of internal processes to a more flexible one supporting speed to market, better intelligence and connecting with customers.

In support of these objectives, IBM has been making significant investments over the recent past. These include $2B on the acquisition of SoftLayer and $1.2B on data centre expansion. Through SoftLayer, IBM now has a cloud offering that it sees as attractive to the new breed of “born on the cloud” businesses that would never previously have seen anything from IBM as relevant to their needs. Through the datacentre investment, together with the SoftLayer infrastructure, IBM now has data centres in all major geographies and financial centres, with plans to expand in the Middle East and Africa in 2015. IBM has also announced the open beta of a new Platform as a Service offering code-named “BlueMix”.

The new platform is an interesting development, which highlights the shift in the kind of software that IBM sees needs to be developed. This software, from an IBM perspective, is “apps” that must be attractive to end users, be supported on mobile devices and be developed iteratively in response to user feedback. IBM says that the BlueMix platform is based on open standards (IBM is a long-time supporter of OpenStack) and the developer can choose any language runtime or bring their own. Critically, IBM claims that it is easy to connect to on-premises systems plus other public and private clouds.

The acquisition of Fiberlink Communications provides IBM with the MaaS360 product, which will enable IBM to offer cloud-based or on premise mobile device management (MDM), mobile content management (MCM), and mobile application management (MAM) including containerization.

So what does this mean from IBM’s point of view? It is clear that IBM sees that cloud services provide more value to the IBM business than the manufacture of volume servers. It establishes PaaS as the model for developing and deploying cloud-based applications (note that IBM is a follower in this area, as other major cloud service providers have already shifted to this approach). It emphasizes the importance of mobile device management as a critical aspect of cloud service security. In effect, the weakest link in this user-centric model for IT services delivered through apps is the user and her mobile device.

Is the cloud the real future of IT services delivery?  In KuppingerCole’s opinion the cloud is a commoditized variant of managed services delivery like hosting and so it has a place. Standardizing services and enabling elasticity clearly has a value. The cloud provides an opportunity for SMEs to obtain higher quality and more secure IT services than they could provide themselves.  However this is a market with low margins.  For large organizations, the cloud provides value through standardized solutions with lower project cost and moving from Capex to Opex. On the other hand, the cloud currently adds complexity for large organizations in the areas of selection, management, integration, and auditing. Furthermore, in Europe there is grave concern over legal and compliance implications.  Most large organizations are using the cloud in some way or another, but frequently they refrain from using the cloud for business critical applications or regulated data.

What is the real long-term value of apps to organizations – is this a bubble? In KuppingerCole’s view there is no doubt that apps provide a good way to deliver services to customers, partners and associates through mobile devices (albeit that the mobile versions are commonly not as good as the desktop ones). Apps also appear to be more targeted towards consuming than working on information, and more towards the occasional user than the power user doing his everyday work. It is less clear that this approach provides real differentiation – if every organization is doing it then it becomes an additional cost rather than a benefit. And who knows today what the device, user interface, and application paradigm will be in a few years from now, in this ever-changing world?

Iterative software development provides an excellent way of capturing user requirements, but does it lead to secure software? By analogy – would you build a skyscraper or a commercial airliner this way? To create robust and secure systems you need to apply good engineering principles from the foundation upwards.

There is logic in IBM’s strategy; there is a growth market in the cloud but most organizations will continue to use a mix of IT services delivery models.  KuppingerCole sees some large organizations using private cloud variants, for services such as SharePoint, from specialized providers who will take care of the legal and compliance concerns.  Here IBM has an advantage since it supports both public and private cloud as well as more traditional models including hosting and on premises managed services.

For more information on best practices for managing cloud and mobile device security register now for the European Identity and Cloud Conference to be held in Munich during May 2014.


Cyber Security for the Citizen and SME

17.01.2014 by Mike Small

This week the UK government launched the Cyber Street programme to improve the cyber security of UK residents and SMEs. This is complemented by a Cyber to the Citizen initiative from the BCS, the UK Chartered Institute for IT.

The background to this is the continuing concern that most cyber-crime is opportunistic and could be prevented if people consistently took simple measures. For example, UK GCHQ estimates that 80 per cent of all cybercrime could be prevented by basic cyber hygiene measures, saving the UK economy billions of pounds annually. The campaign aims to increase the percentage of individuals and Small to Medium Enterprises (SMEs) consistently adopting ten or more cyber security behaviours (out of 17 for individuals and 14 for SMEs).

These behaviours include:

  • Install and keep up to date Anti-Virus Software
  • Use strong passwords
  • Only download from trusted websites or organizations
  • Beware of phishing emails
  • Review and protect business information

The study upon which this initiative is based found that only 10% of SMEs consistently applied these behaviours, and it is estimated that the Cyber Street programme will provide measurable benefits based on improving this by 5 percentage points.

This fits with KuppingerCole’s view that the main information security risks are still related to human weaknesses.  The root causes can be categorized as malice, misuse and mistake.  Malice includes activity by organized criminals outside of the organization as well as actions by insiders who may hold a grudge or who are being blackmailed.  Misuse is where people with legitimate access to information use that information in inappropriate ways for example through curiosity.  Mistake is where people leak information through inattention or carelessness.

Sometimes it seems that the only people who value your information are those who want to steal it. People would not treat money with the same disregard that they treat their IT systems and the data they hold. Our view is that what is required is better information stewardship, which is based on the idea that a good steward takes care of property that is not his or her own. It is important that everyone understands that they are individually responsible for the security of the information they hold, the IT systems they run and the consequences of data being mishandled.

This is an interesting initiative on the part of the UK government, supported by the BCS. We hope that it will be successful in developing a cyber-aware culture and reducing opportunistic cyber-crime.


© 2014 Mike Small, KuppingerCole