Can EU customers rely on US Cloud Providers?

05.08.2014 by Martin Kuppinger

The recent US court decision has added to the concerns of EU customers (and customers in other regions such as APAC) regarding the use of Cloud services from US-based providers. The decision orders Microsoft to turn over a customer’s emails stored in Ireland to the US government, requiring the company to hand over any data it controls, regardless of where it is stored.

While the judge has temporarily suspended the order from taking effect to allow Microsoft time to appeal to the 2nd US Circuit Court of Appeals, it hangs, like the sword of Damocles, over the US Cloud Service Providers (CSPs).

The decision further increases the uncertainty many customers feel regarding the Cloud, and is only the latest such blow since the Snowden revelations. So let’s look at the facts behind the FUD (fear, uncertainty, doubt).

In fact, the most important issue in the Cloud is control, not location. Many of the current regulations have been criticized for focusing on location instead of control. When appropriate security controls are in place, why should it make a difference whether data is stored in an EU datacenter or in a US datacenter? The location argument is somewhat invalid anyhow, given that data might be routed through other locations, based on how the IP protocol stack works. This is what triggered the recent discussion about an EU Cloud.

However, if control is the better concept in these days of the Internet and the Cloud, the court decision has some logic. The alternative – making it about location, not about control – would in fact mean that a US criminal could hide data simply by storing it outside the US in the Cloud.

Notably, the recent US court decision (still subject to appeal) does not provide blanket access to data held by CSPs. In this case it appears that the data is related to criminal activity. It is common to virtually all legal systems that data can be seized by law enforcement if there is suspicion that a crime has been committed.

However, there is a risk that your data could legally be seized by law enforcement in a non-EU country (e.g. the US, Russia, etc.) on suspicion of an act that is not a crime in your country and which may not even have been committed in the country wishing to seize it. There have been a number of contentious examples of UK citizens being extradited to the US for these kinds of reasons.

The differences in laws and legal systems between countries, and court decisions such as the recent one, do not make it easier for EU customers to trust non-EU Cloud Providers. In fact, uncertainty seems to be increasing, not decreasing. Waiting for harmonization of legislation or for trade agreements such as TTIP (Transatlantic Trade and Investment Partnership) is not an answer.

Organizations today are in a situation where, on the one hand, business wants new types of IT services, some of which are only available from the Cloud. On the other hand, there is this uncertainty about what can and cannot be done.

The only thing organizations can (and must) do is to manage this uncertainty in the same way as for other kinds of risks. Businesses are experienced in deciding which risks to take. This starts with a structured approach to Cloud Service Provider selection, involving not only IT but also procurement and legal. It includes legal advice to understand the concrete legal risks. It also includes analyzing the information sensitivity and information protection requirements. In this way, the specific risk of using individual Cloud Service Providers and different deployment models such as public or private Clouds can be analyzed. It transforms uncertainty into a good understanding of the risk being taken.

KuppingerCole’s research around Cloud Assurance and Information Stewardship and our Advisory Services, for instance, can help you with this.

Notably, the frequently quoted answer “let’s just rely on EU CSPs” oversimplifies the challenge. It requires real alternatives and pure-play EU offerings, and both are rare. Many EU offerings are not feature-equal or are far more expensive; others are not pure-play EU. The same applies to other regions, of course. Yes, these services must be taken into consideration. But “EU is good, US is bad” is too simple when looking at all aspects. It is better to understand the real risks of both and choose the best way based on this – which might include on-premise IT. The basic answer to the question in the title is simply: “It depends.” The better answer is: “Understand the real risk.”

This article was originally published in the KuppingerCole Analysts’ View Newsletter.



Extending your Active Directory to the Cloud

01.07.2014 by Martin Kuppinger

Most organizations have a Microsoft Active Directory in place. The Active Directory (or, in short, AD) builds the foundation of their on-premises infrastructure for managing users, performing their primary network authentication and authentication to AD-integrated applications such as Microsoft Exchange Server, and providing some network infrastructure services, including client configuration management based on Group Policies. AD is a purpose-built directory service that is optimized for supporting these requirements. One of its specific capabilities is Group Policies – client management is commonly out of scope for directory services. Another example is the sophisticated replication features of AD, which are required to provide (amongst other things) seamless authentication, load balancing of authentication requests, and user management.

This works well for the employees and the on-premise IT infrastructure. However, when it comes to external users, things become more challenging. While most organizations manage the “long-term” externals – the ones who spend a lot of time on-premises, need access to internal IT systems, and frequently even have a company e-mail address – in the Active Directory, they struggle with managing all the other externals, such as employees of business partners who need only occasional access to selected applications, or customers.

The purpose-built AD is not targeted towards these use cases. On-boarding and off-boarding thousands of employees of an insurance broker or managing the local operators of an airline across the world are not the standard use cases for AD. And what about managing millions of customers that need access to some applications?

There are workarounds, but none of them is really convincing. These external users might be managed in a separate forest or in a separate domain within an existing forest. They might even be managed within an existing domain (particularly in ADs that follow a single-domain approach), but that makes security management pretty cumbersome. And that is before we even speak about challenges such as schema changes for specific requirements or the replication issues caused by managing far more users than just the employees in the Active Directory.

The common answer to these challenges is to set up another, separate directory service for external users or customers. Microsoft’s lightweight answer is AD LDS (Active Directory Lightweight Directory Services). Other vendors provide their LDAP (Lightweight Directory Access Protocol) directory servers to manage these users and authenticate them.
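As a rough illustration of this workaround, the following sketch provisions an external user into a separate LDAP directory (such as AD LDS or another LDAP server) rather than into the corporate AD. It uses the open-source ldap3 Python library; the server URL, credentials, and directory tree are hypothetical placeholders, not a recommendation for any specific product.

```python
# Illustrative sketch only: provisioning an external user into a separate LDAP
# directory (for example AD LDS or another LDAP server) instead of the corporate AD.
# Server address, credentials, and DNs are invented placeholders.

from ldap3 import Server, Connection, ALL

server = Server("ldaps://externals-directory.example.com", get_info=ALL)
conn = Connection(
    server,
    user="cn=admin,dc=example,dc=com",  # hypothetical service account
    password="change-me",
    auto_bind=True,
)

# Create the external user in a dedicated subtree, separate from employee accounts.
conn.add(
    "cn=jdoe,ou=externals,dc=example,dc=com",
    object_class=["inetOrgPerson"],
    attributes={"sn": "Doe", "givenName": "Jane", "mail": "jane.doe@partner.example"},
)
print(conn.result)
```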

But there is another answer now: cloud-based User and Access Management as part of the emerging cloud IAM offerings. Several vendors deliver solutions that allow managing customers and external users in integration with the existing on-premise infrastructure. Microsoft’s own answer in that field is Azure Active Directory, a cloud-based directory service that is quite different from the traditional Active Directory. It supports flexible schemas, scales virtually without limit (Microsoft Office 365 is based on it), and provides functionality that helps manage external users far better than the on-premise Active Directory can – and potentially better than other on-premise directory services can. With upcoming extensions, Microsoft will further add capabilities for managing external users.

There are challenges such as synchronizing and/or federating the existing users of AD and other directory services to Azure Active Directory (or other services in that field).

Nevertheless, there are new options now to extend the existing AD to the cloud and to serve the new business demands of on-boarding, off-boarding, and managing business partners and customers – delivered by Microsoft and other players in the market. Organizations using AD should therefore start reviewing and rethinking their Active Directory strategy. There are various options for extending the on-premise AD to the cloud, and it is time to define the future strategy around AD. That future, for most organizations, will be hybrid.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


SDN is NOT the next big thing. SDCI is.

13.12.2013 by Martin Kuppinger

I have read many predictions recently that SDN (Software Defined Networking) is the next big thing in IT. Wrong. It is not. It is just a small piece in a bigger story. And just looking at SDN is not sufficient.

The next big thing is SDCI – Software Defined Computing Infrastructure. This is about “software-defining” everything. Hardware virtualization – “software defining hardware”, so to speak – is a reality. Software Defined Storage is becoming increasingly popular. SDN is another element. A number of vendors, such as VMware, talk about a Software Defined Cloud Datacenter. I don’t like that term, because the “Cloud” element might be nice from a marketing perspective, but tends to narrow things down to a specific form of Computing Infrastructure. So I will use SDCI for now.

When looking at SDCI versus SDN, claiming that SDN is the next big thing is like inventing the locomotive without the rail infrastructure. It solves only a portion of the problem, from a technical, network-centric view.

However, SDCI is far more than that. It is about managing how business services are operated on a flexible computing infrastructure, which must include all elements of this infrastructure. It is about defining the policies for the entire infrastructure. This is an interesting challenge, because it is no longer about network, storage, or other technical policies, but about translating business policies into technical ones. Regulatory compliance, security requirements, availability, performance, but also the willingness of business to pay for a certain level of service – all of that flows into policies that define how the infrastructure is used and how to balance the various requirements.
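To make that translation step more concrete, here is a minimal, purely illustrative sketch of how a business-level policy might be mapped to technical constraints across compute, storage, network, and placement. All class names, fields, and thresholds are invented for illustration and do not describe any particular SDCI product.

```python
# Illustrative sketch only: a hypothetical, much simplified policy model showing how a
# business-level policy might be translated into technical constraints spanning compute,
# storage, network, and placement. All names, fields, and thresholds are invented.

from dataclasses import dataclass

@dataclass
class BusinessPolicy:
    service: str
    data_classification: str     # e.g. "PII" or "public"
    availability_target: float   # e.g. 0.999
    max_monthly_budget_eur: int  # what the business is willing to pay

def to_infrastructure_constraints(policy: BusinessPolicy) -> dict:
    """Translate one business policy into constraints for the whole infrastructure stack."""
    pii = policy.data_classification == "PII"
    return {
        "compute": {"min_replicas": 2 if policy.availability_target >= 0.999 else 1},
        "storage": {"encryption_at_rest": pii},
        "network": {"isolation": "dedicated_segment" if pii else "shared"},
        "placement": {"allowed_regions": ["eu"] if pii else ["eu", "us", "apac"]},
        "cost_ceiling_eur": policy.max_monthly_budget_eur,
    }

if __name__ == "__main__":
    crm = BusinessPolicy(service="crm", data_classification="PII",
                         availability_target=0.999, max_monthly_budget_eur=5000)
    print(to_infrastructure_constraints(crm))
```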

SDCI will also revolutionize security, in particular network security. In dynamic environments, there is no place for traditional firewalls anymore, but there are fantastic new opportunities for securing information. Such infrastructures allow us to manage security consistently across “machines”, storage, and network, in the context of business policies and in the context of identities. Instead of having multiple disparate approaches to security – a little bit of firewall here, a little bit of endpoint security there, some IAM elsewhere, etc. – we are heading towards integrated security. This integrated security can still be layered, but it will be layered in an integrated manner, unlike “layered” security today, which means using multiple levels of disparate security where no one really knows how good the combined result is – simply because there is no integration, no consistent view, no consistent policies.

The security aspect is another reason why SDN by itself is not what we need. SDN is about continuing segregation; it allows us to repeat old mistakes at a higher level. SDCI allows us to do things better. That is the reason why SDCI is the next big thing – and it will become a really big thing.



CeBIT – Shareconomy without connectivity?

06.03.2013 by Martin Kuppinger

Yesterday I spent a day at the CeBIT fair, still the world’s largest IT fair. Besides the many interesting meetings I had previously scheduled, I started thinking about the CeBIT “Leitthema” – their “claim of the year”. This year it was “Shareconomy”. I still do not know what this term is supposed to mean. There is some fuzzy description on the CeBIT homepage, but in contrast to topics like “Cloud” and “Managing Trust” in 2011 and 2012 respectively, Shareconomy – described as “sharing and using information, resources and experience based on new forms of collaboration” – is a very amorphous concept. They then try to associate it with crowdsourcing, smart infrastructures and smart grids, data security, big data, etc.

In fact, I think that there is something behind this rather strange buzzword. Back in September 2012, KuppingerCole hosted an event about the 4Cs: Communication, Collaboration, Content, and Cloud, which was about enabling new ways of collaboration and communication in a secure way. That probably is what the Shareconomy is all about.

When I look at our advisory business, I see another red-hot topic. In German I’d call it “Umgang mit Dritten”, i.e. how to interact with third parties and the services they provide in a consistent, standardized way. That is about Cloud Security, Identity Federation, the API Economy and security therein, etc. Opening up the perimeter and supporting business processes that integrate business partners, customers, etc. is highly important. So maybe that is also part of the Shareconomy. For sure, you will be able to learn a lot about this at our upcoming EIC – the real stuff, not the marketing buzz and fuzz.

However, the thing that confused me most at CeBIT – in the context of their Shareconomy claim – was the lack of free WiFi. Sharing without connectivity? Or at least sharing without free or affordable connectivity? Will that work? I doubt it. I used my UMTS cards in the notebook and iPad respectively, because otherwise I would have had to pay 30 € for a 4-hour WiFi pass. That is even more than the old-school hotels that still charge for WiFi. Ridiculous.



A totally unsurprising proposal for European cloud initiatives

23.11.2011 by Martin Kuppinger

Today I received a press release from SAP about a new study by Roland Berger (a large consulting firm) and SAP with the title “Cloud Computing brings new growth opportunities to Europe’s IT and communications industry”. It ends with a program consisting of five points – the proposal of Roland Berger and SAP. The points are:

  • Define a European legal framework for data protection and data security
  • Define a “European Cloud Gold Standard” as a sort of certificate for cloud providers
  • Spend EU money on research and development around innovations in the cloud
  • Support Cloud Computing for medium-sized businesses (based on the already existing programs)
  • Public services and governments should procure cloud services, showing their trust in cloud offerings (and thus driving others to procure such services as well)

There were some other well-known findings like the fact that IaaS is dominated by some large US companies like Amazon and Microsoft, that the Cloud Computing market will grow, and so on.

But, to be honest: all of this is self-evident or already exists, and some aspects are questionable. Yes, there is growth potential in the cloud – for all IT providers in all regions worldwide. We have all known that for years. There is an existing European legal framework for data protection, at least to some extent. There is room for improvement, but it isn’t missing (the report claims that EU standards for data protection are missing, which is just wrong). A worldwide accepted standard for cloud services is required – good and strong certifications. But the question isn’t whether we need it; it is what it could look like and how it could be granular enough for quick and efficient procurement processes. Innovation: a great thing. The EU has been investing in that for years. And yes, it helps, but innovation comes from a lot of different sources.

The last two points – medium-sized businesses and governments procuring cloud services – show a fundamental misunderstanding: Cloud Computing is just another deployment model. The reason to move to a cloud service is that it might be the most appropriate way to procure a service (instead of on-premise production). But there is no reason for a government to move to the cloud as long as these services aren’t better than on-premise services – better in a broad sense, taking all factors such as functionality, price, risk ratings, security, availability, and so on into account. The same is true for medium-sized businesses.

At the end of the day it is about providing services to the customer which are good enough. Certifications, standards, and legal frameworks will help. But the main point is still about providing the better service, not about complaining about the limiting factors. By the way: it would also have been worth mentioning that the EC Privacy Directive will undergo significant changes next year, beyond what it provides today around data protection.



SAML, SCIM – and what about authorization?

16.11.2011 by Martin Kuppinger

Cloud Computing is just another delivery model for IT services. However, due to the specifics of cloud services, such as multi-tenancy and many others, the requirements are sometimes even higher than for on-premise services. One of these requirements in well-architected IT environments and for well-architected applications is the ability to externalize security. That includes relying on external directories for administering and authenticating users, e.g. on Identity Providers. It might include the capability of “cloud provisioning”, i.e. receiving changes of users – even though I clearly favor federation as a loosely coupled approach over provisioning. It should include support for external logs, event monitoring, and so on – unfortunately that appears to be a topic no one is really working on.

And it should include the capability of managing authorizations in cloud services based on centrally managed policies (managed on-premise or using a cloud service – but centrally, not per cloud service). There is limited value in federating users and then doing all the administration work per cloud service using the cloud service’s proprietary management GUIs or APIs. However, authorization is where the problem really starts.

There is a standard for distributed, dynamic authorization management out there: XACML, the eXtensible Access Control Markup Language. It allows describing the rules, it can work with different repositories for identity information (PIPs, Policy Information Points) and other information required for authorization decisions, it provides interfaces to custom and standard applications, and so on. However, I haven’t seen XACML in the cloud so far. Unfortunately, I also haven’t seen any real alternative to XACML.

Some might claim that SAML could do that job. There is the SAML Authorization Decision Query as part of the SAML 2.0 standard. But that leads pretty quickly to SAML/XACML interoperability and things like the SAML 2.0 profile of XACML. In fact, if it is about having a consistent set of policies expressed in a common standard, XACML is what we need. We need to define and manage these policies consistently per organization, not per service. Services should request authorization decisions – at least in an ideal world. However, when looking at the cloud, another aspect comes into play: performance. Performance is a general issue when externalizing authorization decisions. For cloud services which have to ask many different authorization “engines”, it is an even bigger issue. And there is the issue of latency, which is a factor in cloud environments due to the geographical distances you might find there.

Thus, while XACML is fine for defining policies, the interesting question is: should cloud services ask an external authorization engine for every authorization decision? Or is it better to push the relevant XACML policies to the cloud service and make the authorization decisions there? In the latter case, we still need a way to efficiently access the PIPs for the other attributes required for the authorization decision.
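The following sketch contrasts the two deployment options in simplified Python rather than actual XACML: asking an external Policy Decision Point (PDP) per request versus evaluating a locally replicated copy of the centrally managed policy. The PDP interface, policy representation, and attribute names are all hypothetical and only meant to illustrate the latency trade-off.

```python
# A minimal sketch (plain Python, not real XACML) contrasting the two options above:
# (1) ask an external Policy Decision Point (PDP) for every authorization decision, or
# (2) replicate the centrally managed policy to the cloud service and decide locally.
# The PDP interface, policy representation, and attribute names are all hypothetical.

import time
from typing import Callable

# Option 1: remote decision per request – simple, but every call pays network latency.
def authorize_remote(ask_pdp: Callable[[dict], bool], request: dict) -> bool:
    start = time.monotonic()
    decision = ask_pdp(request)  # stand-in for an XACML request/response over HTTPS
    print(f"remote decision took {(time.monotonic() - start) * 1000:.1f} ms")
    return decision

# Option 2: evaluate a locally replicated copy of the centrally managed policy.
# The policy is still defined and distributed centrally; only evaluation moves here.
LOCAL_POLICY = {
    ("manager", "approve_order"): True,
    ("clerk", "approve_order"): False,
}

def authorize_local(request: dict) -> bool:
    return LOCAL_POLICY.get((request["role"], request["action"]), False)  # default deny

if __name__ == "__main__":
    req = {"role": "manager", "action": "approve_order"}
    print(authorize_remote(lambda r: True, req))  # dummy PDP that always permits
    print(authorize_local(req))
```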

I don’t have the full answer. However, I’m convinced that XACML is a key element for authorization in the cloud, given that it is the standard for externalizing authorization decisions. But it might need some enhancements to work optimally for cloud security as well. It will definitely need improved security architectures for the cloud services themselves, so that they can externalize authorization decisions and rely on centrally managed policies. And it definitely needs some thinking about the overall security architecture for cloud services. So I’m looking forward to comments on this post – maybe I’ve missed something and everything is already there; maybe this initiates some enhancements to standards. I don’t know, but I’m really curious.


Data Protection Laws – Location or Information?

15.06.2011 by Martin Kuppinger

One of the intensively discussed issues in Cloud Computing is compliance with local data protection and privacy laws. The European laws, for instance, are sort of “location-dependent”. It is much easier to deal with PII (Personally Identifiable Information) within the borders of the EU than outside of that region. That is the reason why many large Cloud Providers build data centers within the EU to support their European customers.

The question which recently came to my mind is: does it really make sense to focus on location? Shouldn’t we rather focus on the security of the information itself? The goal is to avoid abuse of PII and other sensitive information. The laws focus on processing, with a very broad definition of the term “processing”. Processing is allowed only if the providers follow specific rules. However: when we clearly define these rules, when we audit the providers, when we do certification – why should the location really matter?

You could argue that there are regions where you wouldn’t expect the government to respect these rules. You could argue that there are countries like the US where some laws contradict European laws. However, all of that could easily be part of the rules defined in the law. There is no logical reason to do it by location. If you look at the way governments in some European countries act, I wouldn’t say that location is the best choice for enforcing data protection.

From my perspective it would be a good idea to fundamentally rethink data protection laws and to define protection requirement levels for different types of PII and different types of processing. Rules for the requirements that external (cloud) providers have to fulfill could then be defined – independent of location. If one of these rules contradicts the local laws of the country in which the provider has its data center, the result would be the same as today. But overall, we would end up with far more flexibility for the cloud.

However, IT reality is always far ahead of the laws. Thus we probably have to accept that it will take many years until the laws reflect the reality of today’s globally distributed, service-based IT.



SCIM – will SPML shortcomings be reinvented?

23.04.2011 by Martin Kuppinger

There is a new initiative driven by Google, salesforce.com, and Ping Identity called SCIM (Simple Cloud Identity Management). It claims to overcome the shortcomings of SPML (Simple Provisioning Markup Language), a standard that has been around for some 10 years. SPML aims to be a standard for provisioning information between systems. It is supported by most provisioning and access governance tools, but only by few target systems. SAP is probably its most important supporter.

Google, salesforce.com, and others in the cloud don’t support SPML. Thus, provisioning to these systems requires using proprietary APIs, if available at all – Google and salesforce.com provide such APIs, but not every cloud provider does. To overcome this, work on SCIM has started.

The first question, however, is: why not use SPML? The main reason might be that SPML is XML-based and does not focus on REST, which appears to be the somewhat more efficient (and, especially, more accepted) way to implement standards for the cloud. Another might be that SPML is moving forward very slowly, if it is moving at all. There are many deficiencies in SPML, no doubt about that. They start with the limited support by non-IAM vendors. There are technical limitations as well, including performance issues in large-scale deployments and limitations regarding what can be provisioned via SPML.

Nevertheless, I’d like to ask two questions:

  • Wouldn’t it be better to join the forces behind SPML and SCIM and build an SPML version 3.0 that supports REST as well?
  • If working on a new or improved standard, wouldn’t it make sense to address all relevant use cases? SPML doesn’t today, and SCIM is not likely to either, judging by the information provided so far.

The first aspect seems to be more of a political issue between different vendors. However, having two standards doesn’t help anyone at the end of the day.

That is even more true if both standards are too lightweight and don’t cover all that companies need today. Looking at the little piece of SCIM specification published so far, it seems that SCIM will only scratch the surface of what is required. The use cases are focused on providing user information to cloud services. However, the topic isn’t identity management, it is identity and access management. The access or entitlement part is the big thing to solve. Dealing with the different APIs of different cloud providers for identities is an issue, but it isn’t the biggest one – several vendors (federation, classical on-premise provisioning, cloud provisioning) have addressed this, at least for the leading cloud providers.

But what about controlling who is allowed to do what in these services? How to manage entitlements, e.g. group memberships, authorization rules, and other things? XACML is a standard which supports this, but again there is little to no support by cloud providers for XACML – just as with SPML. Thus, when starting to define a new standard, it shouldn’t be too simple a one, which SCIM appears to be at this point in time. It has to be one which covers all relevant use cases of identity and access management. There is only limited value in providing user information to a cloud service but still having to use the proprietary web administration interface (or some proprietary APIs) to control access for that user, to define groups, roles, policies, and so on.
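To illustrate the gap, here is a minimal sketch of what REST-based user provisioning along the lines of SCIM might look like – and where it stops. The endpoint, schema URN, and token are hypothetical placeholders, not any provider’s actual API, and the entitlement part is deliberately left unimplemented because that is exactly what the standard does not cover.

```python
# Illustrative sketch only: REST-based user provisioning along the lines of SCIM,
# and where it stops. The endpoint, schema URN, and token are hypothetical
# placeholders, not any actual provider's API.

import requests

BASE_URL = "https://csp.example.com/scim/v1"  # hypothetical SCIM-style endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def provision_user(user_name: str, email: str) -> str:
    """Create the user record – the part a SCIM-like standard would cover."""
    payload = {
        "schemas": ["urn:scim:schemas:core:1.0"],  # assumed early-draft core schema
        "userName": user_name,
        "emails": [{"value": email, "primary": True}],
    }
    resp = requests.post(f"{BASE_URL}/Users", headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()["id"]

# The access part – group memberships, roles, authorization rules – is exactly what is
# still missing: today this typically means each provider's proprietary API or its web
# administration console instead of a common standard.
def assign_entitlements(user_id: str, groups: list) -> None:
    raise NotImplementedError("no common standard – provider-specific API required")
```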

My conclusion: there should be open standards for identity and access management in the cloud. Building on proprietary services means repeating errors made before. But a new standard shouldn’t be too limited from the beginning. That, by the way, is one of the reasons I see behind the very limited success of SPML: it was too limited. I remember a conversation years back with one of the leading people involved in SPML, in which I suggested looking at use cases like supporting client lifecycle management solutions, i.e. tools supporting (amongst other features) software deployments. There are vendors in the client lifecycle management market building custom integrations to HR or provisioning tools today, but not based on SPML – because they have never heard of SPML and because SPML never looked at this use case.

There might be a good reason for an effort like SCIM. But a standard that is merely REST-based and does not really think beyond what SPML supported won’t solve the real-world problems. Thus I strongly recommend rethinking SCIM and looking at significantly extended use cases.

If someone would like to discuss this with me in person, the best place to meet is EIC in Munich, May 10th to 13th.


Why you should focus on the infrastructure layer

21.04.2011 by Martin Kuppinger

In these days of slowly increasing Cloud Computing maturity, it becomes more and more obvious that – and why – IT depends on a well-thought-out layer which I tend to simply call “infrastructure”. I have two simple pictures of IT in mind:

  • The somewhat classical model of platform, infrastructure, and software, as found in PaaS, IaaS, and SaaS in the common Cloud Computing meta models. It is about hardware and other foundational components like operating systems, about the layer in between to manage and orchestrate everything, and about the applications themselves.
  • Another view also consists of three layers: the services exposed to the users (i.e. in most cases the business) on top, the service production (in the public cloud, a private cloud, or non-cloudified IT environments) at the bottom – and a layer in between which, again, is used for managing and orchestrating everything. Again, this layer might best be called “infrastructure”.

This layer is what connects everything. Thus, the efficiency and effectiveness of this layer are the foundation of the efficiency and effectiveness of the entire IT. Optimizing this layer allows you to better connect the available services to the business demands, and to manage the different layers in the cloud.

When looking at that layer, there are some few key elements:

  • Service Management, i.e. the entire area of procurement, service request management, accounting, availability, performance, and whatever it takes to ensure that the services are delivered as expected
  • Information Security Management, including IAM (Identity and Access Management) and at least IT GRC (Governance, Risk Management, Compliance)
  • Application Infrastructures, i.e. middleware for connecting services, enhancing them where required, and doing the orchestration

Did I miss important elements? OK, there is the classical IT security, however that’s part of Information Security – the reason we are looking at IT security is to protect information. You might add some other elements, however I tend to keep this model simple.

To me it appears more important to look at the dependencies between these three elements. Information Security and Service Management have to work hand in hand to ensure that access to services is restricted and controlled. Applications and Information Security are tightly related – think about how to build secure apps. And applications are, at the end of the day, nothing other than services which have to be managed.

I personally believe that starting with such a model and outlining the blueprint for your future IT definitely helps in separating the important from the less important things, and in focusing on building an IT ecosystem in your organization that is stable and works with whatever you plan to do in the Cloud.

See you at EIC 2011 in Munich, May 10th to 13th.


What I would expect from capacity MANAGEMENT

24.11.2010 by Martin Kuppinger

Recently I talked with a vendor about their news around capacity management. Capacity management is used as a term to describe products for optimizing workloads in data centers and making the best use of the resources within those data centers. Honestly, I hadn’t done a deep dive into that specific area before, and maybe I’m green on advanced IT topics – but when vendors promise capacity management, I’d expect it to be somewhat active. However, I’ve learned that it is about reporting and analytics – even when associated with terms like dynamic, continuous, proactive, or whatever. I’ve checked several other offerings in the meantime, and it appears to be the common understanding amongst these vendors that, in the space of capacity management, management ends with some nice reports.

I would have used the terms capacity reporting or capacity analytics instead, but I know that vendor marketing tends to “beautify” their stories a little.

So what are my expectations of capacity management? The issue is obvious – it is complex to manage the workloads in increasingly virtualized and complex data centers. And it becomes even more complex when there are different “clouds”, private and public, where we might place these workloads. Thus we need support for optimizing this – dynamic or continuous capacity management is key. However, I’d expect at least a little more than just the information that there is something to be optimized. Achieving fully automated management will surely be a long journey, requiring a strong and valid policy model to ensure that nothing goes wrong. It will also require tight integration with Business Service Management, to ensure that the specifics of business processes – like high workloads at the end of the month or year – are met. It will even require that financial aspects (costs of internal and external cloud resources) and governance aspects (which data and workloads are allowed to be placed where, due to governance/compliance/security constraints?) are taken into account. In the meantime, automatically created “jobs” for the virtualization, server, and application management tools, to easily make the proposed changes reality, would be a first step.
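As a rough, purely illustrative sketch of that first step – turning an analytics finding into an actionable but still reviewable job while respecting simple governance and cost policies – consider the following; all names, thresholds, and structures are invented and not taken from any vendor’s product.

```python
# A rough sketch only: turning a capacity analytics finding into an automatically
# created, but still reviewable, change proposal ("job") that respects simple
# governance and cost policies. All names, thresholds, and structures are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    name: str
    cpu_utilization: float   # 0.0 .. 1.0 on the current host
    contains_pii: bool       # governance flag: PII must stay on-premise in this sketch
    cost_on_premise: int     # estimated monthly cost if kept on-premise
    cost_public_cloud: int   # estimated monthly cost if moved to a public cloud

def propose_job(w: Workload) -> Optional[dict]:
    """Return a proposed migration job, or None if policy forbids it or it isn't worth it."""
    if w.contains_pii:
        return None  # governance/compliance constraint: do not move
    if w.cpu_utilization < 0.2 and w.cost_public_cloud < w.cost_on_premise:
        return {"action": "migrate", "workload": w.name, "target": "public_cloud"}
    return None

if __name__ == "__main__":
    candidates = [
        Workload("batch-reporting", 0.10, False, 800, 300),
        Workload("hr-database", 0.10, True, 1200, 400),
    ]
    jobs = [propose_job(w) for w in candidates]
    print([j for j in jobs if j is not None])
```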

I’m looking forward to seeing capacity management become reality. There is definitely value in today’s solutions. But the value could be significantly bigger than it is.

