24.10.2014 by Martin Kuppinger
Today, AWS (Amazon Web Services) announced the opening of their new region, located in Frankfurt, Germany. The new facilities actually contain two availability zones, i.e. at least two distinct data centers. AWS can now provide a local solution to customers in mainland Europe, located close to one of the most important Internet hubs. While on one hand this is important from a technical perspective (for instance, with respect to potential latency issues), it is also an important move from a compliance perspective. The uncertainty many customers feel regarding data protection laws in countries such as Germany, as well as the strictness of these regulations, is a major inhibitor preventing the rapid adoption of cloud services.
Having a region in Germany is interesting not only for German customers, but also for cloud customers from other EU countries. AWS claims that, since they provide a pure-play IaaS (Infrastructure as a Service) cloud service that supplies only the infrastructure on which the VMs (virtual machines) and customers’ applications reside, their customers have full control over their own data, especially since AWS CloudHSM allows customers to hold their encryption keys securely in the cloud. This service relies on FIPS 140-2 certified hardware and is completely managed by the customer via a secure protocol. Notably, customers can decide where their data resides: AWS does not move customer data outside of the region where the customer places it. With the new region, a customer can design a high-availability infrastructure within the EU, i.e. across Germany and Ireland.
KuppingerCole strongly recommends that customers encrypt their data in the cloud in a way that allows them to retain control over their keys. However, it must be remembered that the responsibility of the data controller stretches from end to end. It is not simply limited to protecting the data held on a cloud server; it must also cover on-premise systems, networks, and end-user devices. The cloud service provider (AWS) is responsible for some but not all of this. The data controller needs to be clear about this division of responsibilities and take action to secure the whole process, which may involve several parties.
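The recommendation above can be sketched as envelope encryption: a fresh data key encrypts the payload, and a customer-held master key wraps the data key, so only wrapped keys and ciphertext ever leave the customer’s control. This is a toy illustration under stated assumptions – the SHA-256 counter-mode keystream stands in for a real cipher such as AES-GCM, and all function names are invented:

```python
# Sketch of envelope encryption with a customer-held master key.
# Illustrative only: the SHA-256 counter-mode keystream below stands in
# for AES-GCM, which a real deployment would use (e.g. via an HSM).
import hashlib
import hmac
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256 counter-mode keystream."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

def encrypt_for_cloud(master_key: bytes, plaintext: bytes):
    """Encrypt with a fresh data key; wrap the data key with the master key.
    Only the wrapped key, ciphertext, and integrity tag go to the cloud."""
    data_key = secrets.token_bytes(32)
    ciphertext = keystream_xor(data_key, plaintext)
    wrapped_key = keystream_xor(master_key, data_key)
    tag = hmac.new(master_key, wrapped_key + ciphertext, hashlib.sha256).digest()
    return wrapped_key, ciphertext, tag

def decrypt_from_cloud(master_key: bytes, wrapped_key, ciphertext, tag):
    """Verify integrity, unwrap the data key, and decrypt."""
    expected = hmac.new(master_key, wrapped_key + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    data_key = keystream_xor(master_key, wrapped_key)
    return keystream_xor(data_key, ciphertext)
```

In a real deployment the master key would live inside an HSM (such as AWS CloudHSM) and never be exported; only the wrap and unwrap operations would be delegated to it.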
Clearly, all this depends on which services the customers are using and the specific use they make of them. Amazon provides comprehensive information on data compliance issues; additional information on compliance with specific German laws is also provided (in German). The AWS approach should allow customers to meet requirements regarding the geographical location of data and, based on possession of the keys, keep it beyond the control of foreign law enforcement. However, there is still a grey area: Amazon operates the hardware infrastructure and hypervisors, and no information was available on where the management of this infrastructure is located – whether it is done entirely from the German data center, 24×7, or whether there is a follow-the-sun or some other remote management approach.
Cloud services offer many potential benefits for organizations. These include flexibility to quickly grow and shrink capacity on demand and to avoid costly hoarding of internal IT capacity. In many cases, technical security measures provided by a cloud service provider exceed those provided on-premise, and the factors inhibiting a move to cloud services are more psychological than technical. However, any business needs to be careful to avoid becoming wholly dependent upon one single supplier.
In sum, this move by Amazon reduces the factors inhibiting German and other European customers from moving to the cloud, at least at the IaaS level. For software companies from outside of the EU offering their solutions based on the AWS infrastructure as cloud services, there is less of a change. Moving up the stack towards SaaS, the questions of who is doing data processing, who is in control of data, or whether foreign law enforcement might bypass European data regulations, are becoming more complex.
Hence, we strongly recommend that customers use a standardized, risk-based approach for selecting cloud service providers and ensure that this approach is approved by their legal departments and auditors. While the recent Amazon announcement reduces inhibitors, the legal aspects of moving to the cloud (or not) still require thorough analysis involving experts from both the IT and the legal/audit side.
More information on what to consider when moving to the cloud is provided in Advisory Note: Selecting your cloud provider – 70742 and Advisory Note: Security Organization, Governance, and the Cloud – 71151.
16.10.2014 by Martin Kuppinger
Cloud IAM is moving forward. Even though there is no common understanding of which features are required, we see more and more vendors – both start-ups and vendors from the traditional field of IAM (Identity and Access Management) – entering that market. Aside from providing an alternative to established on-premise IAM/IAG, we also see a number of offerings that focus on adding new capabilities for managing external users (such as business partners and consumers) and their access to Cloud applications – a segment we call Cloud User and Access Management.
There are a number of expectations we have for such solutions. Besides answering how to fulfill legal requirements regarding data protection, especially in the EU, they must meet a number of other requirements. Among these are the ability to manage external users and customers with flexible login schemes and self-registration, inbound federation of business partners and outbound federation to Cloud services, and a Single Sign-On (SSO) experience for users. Another is integration back to Microsoft Active Directory and other on-premise identity stores. In general, performing well in hybrid environments will remain a key success factor and thus a requirement for such solutions in the long run.
One of the vendors that have entered the Cloud IAM market is Centrify. Many will know Centrify as a leading-edge vendor in Active Directory integration of UNIX, Linux, and Apple Macintosh systems. However, Centrify has been growing beyond that market for quite a while, now offering both a broader approach to Privilege Management with its Server Suite and a Cloud User and Access Management solution with its User Suite.
Centrify takes a somewhat different approach than other players in the Cloud IAM market. On one hand, they go well beyond Cloud SSO and focus on strong integration with Microsoft Active Directory, including support for Cloud SSO via the on-premise AD – no surprise given the company’s history. On the other hand, their primary focus is on employees: Centrify User Suite extends the reach of IAM not only to the Cloud but also to mobile users.
This makes Centrify’s User Suite quite different from other offerings in the Cloud User and Access Management market. While they provide common capabilities such as SSO to all types of applications, integration with Active Directory, strong authentication of external users, and provisioning to Cloud/SaaS applications, their primary focus is not on simply extending this to external users. Instead, Centrify focuses on extending their reach to support both Cloud and mobile access, provided by a common platform and delivered as a Cloud service.
This approach is unique, but it makes perfect sense for organizations that want to open up their enterprises both to better support mobile users and to give easy access to Cloud applications. Centrify has strong capabilities in mobile management, providing a number of features such as MDM (Mobile Device Management), mobile authentication, and integration with container management solutions such as Samsung Knox. All mobile access is managed via consistent policies.
Centrify User Suite is somewhat different from the approach other vendors in the Cloud User and Access Management market have taken. However, it might be the solution that best fits the needs of customers, particularly those primarily looking at how to enable their employees for better mobile and Cloud access.
14.10.2014 by Martin Kuppinger
The market for Cloud IAM and in particular Cloud User and Access Management – extending the reach of IAM to business partners, consumers, and Cloud applications through a Cloud service – is growing, both with respect to market size and service providers. While there were a number of start-ups (such as Ping Identity, Okta and OneLogin) creating the market, we now see more and more established players entering the field. Vendors such as Microsoft, Salesforce.com or Centrify are already in. Now SAP, one of the heavyweights in the IT market, has recently launched their SAP Cloud Identity Service.
The focus of this new service is managing access for all types of users, their authentication, and Single Sign-On, to on-premise applications, SAP Cloud applications, and 3rd party Cloud services. This includes capabilities such as SSO, user provisioning, self-registration and user invitation, and more. There is also optional support for social logins.
Technically, there is a private instance per tenant running on the SAP Cloud Identity Service, which acts as Identity Provider (IdP) for Cloud services and other SAML-ready SaaS applications, but also as an interface for external user authentication and registration. This connects back to the on-premise infrastructure for accessing SAP systems and other environments, also providing SSO for users already logged in to SAP systems.
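The per-tenant IdP flow described above can be illustrated with a minimal SP-initiated SAML 2.0 AuthnRequest. This is a bare-bones sketch: the endpoint URLs are hypothetical placeholders, and a real request would additionally be signed and sent via an HTTP-Redirect or HTTP-POST binding.

```python
# Minimal sketch of an SP-initiated SAML 2.0 AuthnRequest, addressed to
# a tenant's IdP instance. All URLs below are hypothetical placeholders.
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(sp_entity_id: str, acs_url: str, idp_sso_url: str) -> str:
    """Build an (unsigned) AuthnRequest for the given tenant IdP endpoint."""
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Destination": idp_sso_url,
        "AssertionConsumerServiceURL": acs_url,
        "ProtocolBinding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
    })
    issuer = ET.SubElement(req, f"{{{SAML}}}Issuer")
    issuer.text = sp_entity_id
    return ET.tostring(req, encoding="unicode")

xml_str = build_authn_request(
    "https://sp.example.com/metadata",     # hypothetical SP entity ID
    "https://sp.example.com/saml/acs",     # hypothetical ACS endpoint
    "https://tenant.idp.example.com/sso")  # hypothetical tenant IdP endpoint
```

The IdP would answer with a signed SAML Response containing an assertion for the authenticated user, which the SaaS application consumes at its ACS endpoint.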
With this new offering, SAP is becoming an interesting option in that field. While they do not sparkle with a large number of pre-configured Cloud services – some other players claim to have more than 3,000 Cloud services ready for on-boarding – SAP provides a solid conceptual approach to Cloud IAM, which is tightly tied into the SAP HANA platform, the SAP HANA Cloud, and on-premise SAP infrastructures.
This tight integration into SAP environments, together with the fact that SAP provides its own certified data center infrastructure and the fact that it is from SAP (and SAP buyers tend to buy from SAP), makes it a strong contender in the emerging Cloud User and Access Management market.
05.08.2014 by Martin Kuppinger
The recent US court decision has added to the concerns of EU customers (and of other regions such as APAC) regarding the use of Cloud services from US-based providers. The decision orders Microsoft to turn over a customer’s emails stored in Ireland to the US government. The decision required the company to hand over any data it controlled, regardless of where it was stored.
While the judge has temporarily suspended the order from taking effect to allow Microsoft time to appeal to the 2nd US Circuit Court of Appeals, it remains hanging, like the sword of Damocles, over US Cloud Service Providers (CSPs).
The decision further increases the uncertainty many customers feel regarding the Cloud, and is the latest such blow since the Snowden revelations. So let’s look at the facts behind the FUD (fear, uncertainty, doubt).
In fact, the most important issue of the Cloud is control, not location. There has been criticism of many current regulations for focusing on location instead of control. When appropriate security controls are in place, why should it make a difference whether data is stored in an EU datacenter or in a US datacenter? The location argument is somewhat invalid anyhow, given that data might be routed through other locations, based on how the IP protocol stack works. This is what caused the recent discussion about an EU Cloud.
However, if control is the better concept in these days of the Internet and the Cloud, the court decision has some logic. The alternative – making it about location, not about control – would in fact mean that a US criminal could hide data simply by storing it outside the US in the Cloud.
Notably, the recent US court decision (still subject to appeal) does not provide blanket access to data held. In this case it appears that the data is related to criminal activity. It is common in virtually all jurisdictions that data can be seized by law enforcement if there is suspicion that a crime has been committed.
However, there is a risk that your data could legally be seized by law enforcement in a non-EU country (e.g. the US, Russia, etc.) on suspicion of an act that is not a crime in your country and which may not have been committed in the country wishing to seize it. There have been a number of contentious examples of UK citizens being extradited to the US for these kinds of reasons.
The differences in laws and legal systems between various countries, and court decisions such as the recent one, do not make it easier for EU customers to trust non-EU Cloud Providers. In fact, uncertainty seems to increase, not decrease. Waiting for harmonization of legislation or trade agreements such as TTIP (the Transatlantic Trade and Investment Partnership) is not an answer.
Organizations today are in a situation where, on one hand, the business wants new types of IT services, some of which are only available from the Cloud. On the other hand, there is uncertainty about what can and cannot be done.
The only thing organizations can (and must) do is to manage this uncertainty in the same way as for other kinds of risks. Businesses are experienced in deciding which risks to take. This starts with a structured approach to Cloud Service Provider selection, involving not only IT but also procurement and legal. It includes legal advice to understand the concrete legal risks. It also includes analyzing the information sensitivity and information protection requirements. In this way, the specific risk of using individual Cloud Service Providers and different deployment models such as public or private Clouds can be analyzed. It transforms uncertainty into a good understanding of the risk being taken.
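Such a risk-based selection process can be sketched as a weighted scoring exercise. The criteria, weights, and ratings below are invented examples – in practice, they would be agreed upon with IT, procurement, legal, and audit:

```python
# Illustrative sketch of a standardized, risk-based CSP evaluation.
# Criteria, weights, and ratings are invented examples, not a standard.
RISK_CRITERIA = {                   # weights sum to 1.0
    "data_location_guarantees": 0.30,
    "encryption_key_control":   0.25,
    "certifications_audits":    0.20,
    "contractual_liability":    0.15,
    "exit_portability":         0.10,
}

def risk_score(ratings: dict) -> float:
    """Weighted residual risk, 0 (best) to 5 (worst); each rating is 0-5."""
    return sum(RISK_CRITERIA[c] * ratings[c] for c in RISK_CRITERIA)

provider_a = {"data_location_guarantees": 1, "encryption_key_control": 2,
              "certifications_audits": 1, "contractual_liability": 3,
              "exit_portability": 2}
provider_b = {"data_location_guarantees": 4, "encryption_key_control": 4,
              "certifications_audits": 2, "contractual_liability": 2,
              "exit_portability": 3}

scores = {"Provider A": risk_score(provider_a),
          "Provider B": risk_score(provider_b)}
ranking = sorted(scores, key=scores.get)   # lowest residual risk first
```

The point is not the arithmetic but the discipline: every provider is measured against the same criteria, and the result is a documented, auditable risk decision rather than a gut feeling.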
KuppingerCole’s research around Cloud Assurance and Information Stewardship and our Advisory Services, for instance, can help you with this.
Notably, the frequently quoted answer “let’s just rely on EU CSPs” oversimplifies the challenge. It requires real alternatives and pure-play EU offerings, and both are rare. Many EU offerings are not feature-equal or are far more expensive; others are not pure-play EU. The same applies to other regions, for sure. Yes, these services must be taken into consideration. But “EU is good, US is bad” is too simple when looking at all aspects. It is better to understand the real risks of both and choose the best way based on that – which might include on-premise IT. The basic answer to the question in the title is simply: “It depends.” The better answer is: “Understand the real risk.”
This article was originally published in the KuppingerCole Analysts’ View Newsletter.
01.07.2014 by Martin Kuppinger
Most organizations have a Microsoft Active Directory in place. The Active Directory (or, in short, AD) builds the foundation of their on-premises infrastructure for managing users, performing their primary network authentication and authentication to AD-integrated applications such as Microsoft Exchange Server, and some network infrastructure services including client configuration management based on Group Policies. AD is a purpose-built directory service that is optimized for supporting these requirements. One of its specific capabilities is Group Policies – client management is commonly out of scope for directory services. Other examples are the sophisticated replication features of AD. These are required to provide (amongst other things) seamless authentication and load balancing of authentication requests and user management.
This works well for the employees and the on-premise IT infrastructure. However, when it comes to external users, things become more challenging. While most organizations manage the “long term” externals – the ones who spend a lot of time on-premises, need access to internal IT systems, and frequently even have a company e-mail address – in the Active Directory, organizations struggle with managing all the other externals, such as employees of business partners with occasional access to only a selected application, or customers.
The purpose-built AD is not targeted towards these use cases. On-boarding and off-boarding thousands of employees of an insurance broker or managing the local operators of an airline across the world are not the standard use cases for AD. And what about managing millions of customers that need access to some applications?
There are workarounds, but none of them is really convincing. These external users might be managed in a separate forest or in a separate domain within an existing forest. They might even be managed within an existing domain (particularly in ADs that follow a single-domain approach), but that makes security management pretty cumbersome. And we have not yet spoken about challenges such as schema changes for specific requirements, or the replication issues caused by managing far more users than just the employees in the Active Directory.
The common answer to these challenges is to set up another, separate directory service for external users or customers. Microsoft’s lightweight answer is AD LDS (Active Directory Lightweight Directory Services). Other vendors provide their LDAP (Lightweight Directory Access Protocol) directory servers to manage these users and authenticate them.
But there is another answer now: cloud-based User and Access Management as part of the emerging cloud IAM offerings. Several vendors deliver solutions that allow managing customers and external users in integration with the existing on-premise infrastructure. Microsoft’s own answer in that field is Azure Active Directory, a cloud-based directory service that is quite different from the traditional Active Directory. It supports flexible schemas, scales virtually without limits (Microsoft Office 365 is based on it), and provides functionality that helps manage external users far better than the on-premise Active Directory can – and potentially better than other on-premise directory services can. With upcoming extensions, Microsoft will further add capabilities for managing external users.
There are challenges such as synchronizing and/or federating the existing users of AD and other directory services to Azure Active Directory (or other services in that field).
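The core of that synchronization challenge is computing the delta between the on-premise directory and its cloud shadow, then provisioning only what changed. The attribute names and record layout below are simplified assumptions:

```python
# Sketch of one-way directory synchronization: compute the delta between
# an on-premise directory and its cloud-directory shadow. The attribute
# names and record layout are simplified, invented examples.
def directory_delta(on_prem: dict, cloud: dict):
    """Return (to_create, to_update, to_delete) for a one-way sync."""
    to_create = {u: a for u, a in on_prem.items() if u not in cloud}
    to_update = {u: a for u, a in on_prem.items()
                 if u in cloud and cloud[u] != a}
    to_delete = [u for u in cloud if u not in on_prem]
    return to_create, to_update, to_delete

ad_users = {"alice": {"mail": "alice@example.com", "dept": "Sales"},
            "bob":   {"mail": "bob@example.com",   "dept": "IT"}}
cloud_users = {"bob":   {"mail": "bob@example.com",   "dept": "Finance"},
               "carol": {"mail": "carol@example.com", "dept": "HR"}}

create, update, delete = directory_delta(ad_users, cloud_users)
```

Federation avoids much of this by keeping authentication on-premise and passing assertions to the cloud, but even then a minimal shadow of user objects usually has to be kept in sync for licensing and authorization.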
Nevertheless, there are new options now to extend the existing AD to the cloud and to serve new business demand of on-boarding, off-boarding, and managing business partners and customers – delivered by Microsoft and other players in the market. This creates a situation for organizations using AD in which they should start reviewing and rethinking their Active Directory strategy. There are various options for extending the on-premise AD to the cloud, and it is time for defining the future strategy around AD. That future, for most organizations, will be hybrid.
This article was originally published in the KuppingerCole Analysts’ View Newsletter.
13.12.2013 by Martin Kuppinger
I have read many predictions recently that SDN (Software Defined Networking) is the next big thing in IT. Wrong. It is not. It is just a small piece in a bigger story. And just looking at SDN is not sufficient.
The next big thing is SDCI – Software Defined Computing Infrastructure. This is about “software-defining” everything. Hardware virtualization – “software defining hardware”, so to speak – is a reality. Software Defined Storage is becoming increasingly popular. SDN is another element. A number of vendors, such as VMware, talk about a Software Defined Cloud Datacenter. I don’t like that term, because the “Cloud” element might be nice from a marketing perspective, but tends to narrow things down to a specific form of Computing Infrastructure. So I will use SDCI for now.
When looking at SDCI versus SDN, claiming that SDN is the next big thing is like inventing a locomotive without building the rail infrastructure. It solves only a portion of the problem, from a technical, network-centric view.
However, SDCI is far more than that. It is about managing how business services are operated on a flexible computing infrastructure, which must include all elements of this infrastructure. It is about defining the policies for the entire infrastructure. This is an interesting challenge, because it is not about network, storage or other technical policies anymore, but about translating the business policies. Regulatory compliance, security requirements, availability, performance, but also the willingness of business to pay for a certain level of service – all that flows into policies that define how infrastructure is used and how to balance various requirements.
SDCI will also revolutionize security, in particular network security. In dynamic environments, there is no place for traditional firewalls anymore, but there are fantastic new opportunities for securing information. Such infrastructures allow us to manage security consistently across “machines”, storage, and network, in the context of business policies and in the context of identities. Instead of having multiple disparate approaches to security – a little bit of firewall here, a little bit of endpoint security there, some IAM elsewhere, etc. – we are heading towards integrated security. This integrated security can still be layered, but it will be layered in an integrated manner, unlike “layered” security today, which means using multiple levels of disparate security where no one really knows how good the combined result is – just because there is no integration, no consistent view, no consistent policies.
The security aspect is another reason why SDNs by themselves are not what we need. SDNs are about continuing segregation. They allow us to repeat mistakes on a higher level. SDCI allows us to do things better. That’s the reason why SDCI is the next big thing – and it will become a really big thing.
06.03.2013 by Martin Kuppinger
Yesterday I spent a day at the CeBIT fair, still the world’s largest IT fair. Besides the many interesting meetings I had previously scheduled, I started thinking about the CeBIT “Leitthema” – their “claim of the year”. This year it has been “Shareconomy”. I still do not know what this term is supposed to mean. There is some fuzzy description on the CeBIT homepage, but in contrast to topics like “Cloud” and “Managing Trust” in 2011 and 2012 respectively, Shareconomy – described as “sharing and using information, resources and experience based on new forms of collaboration” – is a very amorphous concept. They then try to associate it with crowd sourcing, smart infrastructures and smart grids, data security, big data, etc.
In fact, I think that there is something behind this rather strange buzzword. Back in September 2012, KuppingerCole hosted an event about the 4Cs: Communication, Collaboration, Content, and Cloud, which was about enabling new ways of collaboration and communication in a secure way. That probably is what the Shareconomy is all about.
When I look at our advisory business, I see another red-hot topic. In German I’d call it “Umgang mit Dritten”, i.e. how to interact with third parties and the services they provide in a consistent, standardized way. That is about Cloud Security, Identity Federation, the API Economy and security therein, etc. Opening up the perimeter and supporting business processes that integrate business partners, customers, etc. is highly important. So maybe that is also part of the Shareconomy. For sure, you will be able to learn a lot about this at our upcoming EIC – the real stuff, not the marketing buzz and fuzz. To highlight just a few sessions:
However, the thing that confused me most at CeBIT – in the context of their Shareconomy claim – was the lack of free WiFi. Sharing without connectivity? Or at least sharing without free or affordable connectivity? Will that work? I doubt it. I used my UMTS cards in the notebook and iPad respectively, because otherwise I would have had to pay 30 € for a 4-hour WiFi pass. That is even more than in the old-school hotels that still charge for WiFi. Ridiculous.
23.11.2011 by Martin Kuppinger
Today I received a press release from SAP about a new study by Roland Berger (a large consulting firm) and SAP with the title “Cloud Computing brings new growth opportunities to Europe’s IT and communications industry”. It ends with a five-point program proposed by Roland Berger and SAP. The points are:
- Define a European legal framework for data protection and data security
- Define a “European Cloud Gold Standard” as sort of certificate for cloud providers
- Spend EU money for research and development around innovations in the cloud
- Support Cloud Computing for medium-sized businesses (based on the already existing programs)
- Public services and governments should procure cloud services, showing their trust into the cloud offerings (and thus drive others to procure such services as well)
There were some other well-known findings like the fact that IaaS is dominated by some large US companies like Amazon and Microsoft, that the Cloud Computing market will grow, and so on.
But, to be honest: all this is self-evident or already exists. And some aspects are questionable. Yes, there is growth potential in the cloud – for all IT providers in all regions worldwide. We have all known that for years. A European legal framework for data protection already exists, at least to some extent. There is room for improvement, but it is not missing (the report claims that EU standards for data protection are missing, which is simply wrong). A worldwide accepted standard for cloud services is required – good and strong certifications. But the question is not whether we need it, but what it should look like and how it could be granular enough for quick and efficient procurement processes. Innovation: great thing. The EU has been investing in that for years. And yes, it helps, but innovation comes from many different sources.
When looking at the last two points, the medium-sized businesses and the government procuring cloud services, this shows a fundamental misunderstanding: Cloud Computing is just another deployment model. The reason to move to a cloud service is that it might be the most appropriate way to procure a service (instead of on-premise production). But there is no reason for the government to move to the cloud as long as these services aren’t better than on-premise services – better in a holistic sense, taking all factors like functionality, price, risk ratings, security, availability, and so on into account. The same is true for medium-sized businesses.
At the end of the day it is about providing services to the customer that are good enough. Certifications, standards, and legal frameworks will help. But the main point is still about providing the better service, not about complaining about the limiting factors. By the way: it would also have been worth mentioning that the EC Privacy Directive will undergo significant changes next year, beyond what it provides today around data protection.
16.11.2011 by Martin Kuppinger
Cloud Computing is just another delivery model for IT services. However, due to the specifics of cloud services, like multi-tenancy and many others, requirements are sometimes even higher than for on-premise services. One of these requirements in well-architected IT environments and for well-architected applications is the ability to externalize security. That includes relying on external directories for administering and authenticating users, e.g. on Identity Providers. It might include the capability of “cloud provisioning”, e.g. receiving changes of users – even though I clearly favor federation as a loosely coupled approach over provisioning. It should include support for external logs, event monitoring, and so on – unfortunately, that appears to be a topic no one is really working on.
And it should include the capability of managing authorizations in cloud services based on centrally managed policies (on-premise or using a cloud service – but centrally, not per cloud service!). There is limited value in federating users and then doing all the administration work per cloud service using the cloud service’s proprietary management GUIs or APIs. However, authorization is where the problem really starts.
There is a standard for distributed, dynamic authorization management out there: XACML, the eXtensible Access Control Markup Language. It allows describing the rules. It allows working with different repositories for identity information (PIPs, Policy Information Points) and other information required for authorization decisions, it provides interfaces to custom and standard applications, and so on. However, I haven’t seen XACML in the cloud until now. Unfortunately, I also haven’t seen any real alternative to XACML.
Some might claim that SAML might do that job. There is the SAML Authorization Decision Query as part of the SAML 2.0 standard. But that leads pretty quickly to SAML/XACML interoperability and things like the SAML 2.0 profile of XACML. In fact, if it is about having a consistent set of policies expressed in a common standard, XACML is what we need. We need to define and manage these policies consistently per organization, not per service. Services should request authorization decisions – at least in an ideal world. However, when looking at the cloud, another aspect comes into play: performance. Performance is a general issue when externalizing authorization decisions. For cloud services that have to ask many different authorization “engines”, it is an even bigger issue. And there is the issue of latency, which is a factor in cloud environments due to the geographical distances you might find there.
Thus, while XACML is fine for defining policies, the interesting question is: Should cloud services ask external authorization engines per authorization decision? Or is it the better way to update the relevant XACML policies at the cloud service and do authorization decisions there? However, then we will still need a way for efficiently accessing the PIPs for other attributes required to perform the authorization decision.
I don’t have the full answer. However, I’m convinced that XACML is a key element for authorization in the cloud, given that it is the standard for externalizing authorization decisions. But it might need some enhancements to work optimally for cloud security as well. It will definitely need improved security architectures for cloud services themselves, so that they can externalize authorization decisions and rely on centrally managed policies. And it definitely needs some thinking about the overall security architecture for cloud services. So I’m looking forward to comments on this post – maybe I’ve missed something and everything is already there; maybe this initiates some enhancements to standards. I don’t know, but I’m really curious.
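The PEP/PDP/PIP split that XACML standardizes can be sketched in a few lines. The policy and attribute names below are invented for illustration, and a real deployment would express the policies in XACML rather than code:

```python
# Minimal sketch of externalized authorization in XACML style: a PEP
# asks a PDP, which evaluates centrally managed policies and may pull
# extra attributes from a PIP. Policy and attribute names are invented.
def pip_lookup(subject: str) -> dict:
    """Policy Information Point: resolve additional subject attributes."""
    directory = {"alice": {"role": "finance"}, "bob": {"role": "sales"}}
    return directory.get(subject, {})

POLICIES = [
    # (effect, condition) pairs, first match wins -- analogous to a
    # XACML policy set with a first-applicable combining algorithm.
    ("Permit", lambda req: req.get("role") == "finance"
                           and req.get("resource") == "invoices"
                           and req.get("action") == "read"),
    ("Deny",   lambda req: True),   # default deny
]

def pdp_decide(subject: str, resource: str, action: str) -> str:
    """Policy Decision Point: evaluate a request against the policies."""
    request = {"subject": subject, "resource": resource, "action": action}
    request.update(pip_lookup(subject))   # enrich the request via the PIP
    for effect, condition in POLICIES:
        if condition(request):
            return effect
    return "NotApplicable"
```

The performance question raised above maps directly onto this sketch: either every `pdp_decide` call crosses the network to a central PDP, or the policy set is replicated to a PDP co-located with the cloud service – in which case the PIP lookups are what still need an efficient remote path.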
15.06.2011 by Martin Kuppinger
One of the intensively discussed issues in Cloud Computing is compliance with local data protection and privacy laws. The European laws, for instance, are sort of “location-dependent”. It is much easier to deal with PII (Personally Identifiable Information) within the borders of the EU than outside of that region. That is the reason why many large Cloud Providers build data centers within the EU to support their European customers.
The question which recently came to my mind is: Does it really make sense to focus on location? Shouldn’t we better focus on the information security itself? The target is to avoid abuse of PII and other sensitive information. The laws focus on processing, with a very broad definition of the term “processing”. Processing is allowed only if the providers are following specific rules. However: When we clearly define these rules, when we audit the providers, when we do certification – why should the location really matter?
You could argue that there are regions where you won’t expect the government to respect these rules. You could argue that there are countries, like the US, where some laws contradict European laws. However, all of that could easily be part of the rules defined in the law. There is no logical reason to do it by location. If you look at the way governments in some European countries act, I wouldn’t say that location is the best criterion for enforcing data protection.
From my perspective it would be a good idea to fundamentally re-think data protection laws and to define protection requirement levels for different types of PII and different types of processing. Then rules for the requirements external (cloud) providers have to fulfill can be defined – independent of the location. If one of these rules contradicts the local laws in the country where the provider has its data center, the result would be the same as today. But overall, we would end up with far more flexibility for the cloud.
However, IT reality is always far ahead of the laws. Thus we probably have to accept that it will take many years until the laws reflect the reality of today’s globally distributed, service-based IT.