Gemalto feels secure after attack – the rest of the world does not

25.02.2015 by Martin Kuppinger

At today’s press conference regarding last week’s reports on a possible compromise of Gemalto SIM cards through the theft of encryption keys, the company confirmed that security incidents did occur during the time frame mentioned in the original report. Whether its other security products were affected is difficult to say, however, since significant parts of the attack, especially those targeting the really sensitive parts of the network, did not leave any substantial traces. Gemalto nevertheless concludes that no such attacks took place.

According to the information published last week, a joint team of NSA and GCHQ agents carried out a large-scale attack on Gemalto and its partners back in 2010. During the attack, they obtained the secret keys that are embedded in SIM cards at the hardware level. With these keys, it is possible to decrypt mobile phone calls as well as to create copies of the SIM cards and impersonate their users on mobile provider networks. Since Gemalto, by its own account, produces 2 billion cards each year, and since many other companies have been affected as well, we are facing the possibility that intelligence agencies are now capable of global mobile communication surveillance using simple and virtually untraceable methods.
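
To see why stolen keys are so valuable, recall how GSM call encryption works: the network sends a random challenge (RAND) over the air in the clear, and the SIM derives the call encryption key from RAND and its secret subscriber key Ki. The sketch below illustrates the consequence in Python, with a generic hash standing in for the operator-specific A3/A8 algorithms – a conceptual illustration only, not the real cipher suite.

```python
import hashlib
import os

def derive_session_key(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for the SIM's A8 algorithm: derive the session key Kc
    from the secret subscriber key Ki and the public challenge RAND.
    Real SIMs use operator-specific algorithms (e.g. COMP128 variants)."""
    return hashlib.sha256(ki + rand).digest()[:8]  # Kc is 64 bits in GSM

# Normal authentication: only the operator and the SIM know Ki.
ki = os.urandom(16)            # provisioned into the SIM at manufacturing
rand = os.urandom(16)          # sent by the network in the clear
kc_network = derive_session_key(ki, rand)
kc_sim = derive_session_key(ki, rand)
assert kc_network == kc_sim    # both ends now share the call encryption key

# An eavesdropper who has stolen Ki and captured RAND off the air
# can derive the very same Kc and passively decrypt the call traffic.
kc_attacker = derive_session_key(ki, rand)
assert kc_attacker == kc_network
```

Because RAND is transmitted unencrypted, an attacker holding Ki can derive the session key purely passively – no interaction with the network or the victim’s phone is required.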

It is entirely possible that Gemalto is correct in stating that there is no evidence for such a theft. Too much time has passed since the attack, and a significant part of the logs from the affected network components and servers, which would be needed to analyze such a complex attack, has probably long since been deleted. Still, this attack, just like the theft of the so-called “seeds” from RSA in 2011, makes it clear that manufacturers of security technologies have to monitor and upgrade their own security continuously in order to minimize the risks. Attack scenarios are becoming ever more sophisticated – and companies like Gemalto have to respond.

Gemalto itself recognizes that more has to be done for security and incident analysis: “Digital security is not static. Today’s state of the art technologies lose their effectiveness over time as new research and increasing processing power make innovative attacks possible. All reputable security products must be re-designed and upgraded on a regular basis.” In other words, one can expect that the attacks were at least partially successful – not necessarily against Gemalto itself, but against its customers and other SIM card manufacturers. There is no reason to believe that newer technologies are inherently secure. According to a company spokesperson, Gemalto is constantly under attack, and the outer layers of its defenses have repeatedly been breached. Even if Gemalto does maintain a very high standard of security, the constant risk posed by new attack vectors and ever more capable attackers should not be underestimated.

Unfortunately, the press conference offered no concrete details about which changes to Gemalto’s security practices are already in place and which are planned, beyond a general statement about continuous improvement. However, until the very concept of a “universal key” – in this case the encryption key on a SIM card – is fundamentally reconsidered, such keys will remain attractive targets both for state and state-sponsored attackers and for organized crime.

Gemalto considers the risk to the secure part of its infrastructure to be low. Sensitive information is apparently kept in isolated networks, and no traces of unauthorized access to these networks have been found. However, the absence of traces of an attack does not mean that there was no attack.

Gemalto has also repeatedly pointed out that the attack only affected SIMs for 2G networks. There is, however, no reason to believe that 3G and 4G networks are inherently safer, especially not against massive attacks by intelligence agencies. Another alarming sign is that, according to Gemalto, certain mobile service providers are still using insecure transfer methods. Gemalto speaks of “rare exceptions”, but that nevertheless means that unsecured channels still exist.

The incident at Gemalto demonstrates once again that the uncontrolled actions of intelligence agencies in the area of cyber security pose a threat not only to fundamental constitutional principles such as the privacy of correspondence and telecommunications – and to the relationships between allied states, given that a French company was allegedly attacked on behalf of and with the support of American and British intelligence agencies – but to the economy as well. The image of companies like Gemalto, and with it their business success and enterprise value, is put at risk by such actions.

Even more problematic is that the knowledge of other attackers grows with each newly published attack vector. Stuxnet and Flame have long since been thoroughly analyzed, and it is safe to assume that the intelligence agencies of North Korea, Iran and China, as well as criminal groups, studied them long ago. Such actions are comparable to leaking atomic bomb designs, with one notable difference: you do not need plutonium, just a reasonably competent software developer, to build your own bomb. Critical infrastructures are thus becoming ever more vulnerable.

In this context, one should also consider the idea of German state and intelligence agencies to procure zero-day exploits in order to investigate the computers of suspects. Zero-day attacks are so called because, by the time a vulnerability becomes known, exploit code is already available – the vendor literally has zero days to fix it. In practice, this means that attackers can exploit a vulnerability long before anyone else discovers it. If government agencies keep knowledge of such vulnerabilities to themselves in order to create their own malware, they put the public and businesses in great danger, because one can safely assume that they will not be the only ones with that knowledge. After all, why would the sellers of such information sell it only once?

With all due respect for the need of states and their intelligence agencies to respond to the threat of cyber-crime, a rethink is required in two respects. On one hand, there must be defined state oversight of such surveillance, especially now that governments are capable of nationwide mobile network monitoring in addition to the Internet monitoring they already perform. On the other hand, government agencies finally need to understand the consequences of their actions: by compromising the security of IT systems or mobile communications, they are opening Pandora’s box and causing damage of unprecedented scale.

Operational Technology: Safety vs. Security – or Safety and Security?

24.02.2015 by Martin Kuppinger

In recent years, the area of “Operational Technology” – the technology used in manufacturing, in Industrial Control Systems (ICS), SCADA devices, etc. – has gained the attention of Information Security people. This is a logical consequence of the digital transformation of businesses as well as of concepts like the connected (or even hyper-connected) enterprise or “Industry 4.0”, which describes a connected and dynamic production environment. “Industry 4.0” environments must be able to react to customer requirements and other changes, which requires better connectivity. More connectivity also emerges between industrial networks and the Internet of Things (IoT) – just think about smart meters that control local power production fed into large power networks.

However, when Information Security people start talking about OT security, there can be a gap in common understanding: different terms and different requirements collide. While traditional Information Security focuses on confidentiality, integrity, and availability, OT focuses primarily on aspects such as safety and reliability.

Let’s just pick two terms: safety and security. Safety is not equal to security. Safety in OT is considered in the sense of keeping people from harm, while security in IT is understood as keeping information from harm. Interestingly, if you look up the definitions in the Merriam-Webster dictionary, they are more or less identical. Safety there is defined as “freedom from harm or danger: the state of being safe”, while security is defined as “the state of being protected or safe from harm”. However, in the full definition, the difference becomes clear. While safety is defined as “the condition of being safe from undergoing or causing hurt, injury, or loss”, security is defined as “measures taken to guard against espionage or sabotage, crime, attack, or escape”.

When people from OT security and IT security start talking, it is a good idea to work on a common understanding of terms first. For decades, the two disciplines pursued their separate goals in environments with different requirements and very little common ground. However, the more these two areas become intertwined, the more conflicts occur between them – best illustrated by comparing their views on safety and security.

In OT, there is a tendency to avoid quick patches, software updates, etc., because they might result in safety or reliability issues. In IT, staying at the current release level is mandatory for security. However, patches occasionally cause availability issues – which stands in stark contrast to the core OT requirements. Many people on both sides therefore consider this a fundamental divide between OT and IT: the “Safety vs. Security” dichotomy.

However, with more and more connectivity (even more in the IoT than in OT), the choice between safety and security is no longer that simple. A poorly planned change (even one as simple as an antivirus update) can introduce enough risk of disrupting an industrial network that OT experts will refuse even to discuss it: “people may die because of this change”. In the long term, however, not making necessary changes may increase the risk of deliberate disruption by an attacker. A well-known example of such a disruption was the Stuxnet attack in Iran back in 2007. A much more recent event occurred last year in Germany, where hackers used malware to gain access to the control system of a steel mill and disrupted it to such a degree that a blast furnace could not be shut down properly, causing massive physical damage (but, thankfully, no injuries or deaths).

When looking in detail at many of the current scenarios for connected enterprises and – in consequence – connected OT or even IoT, this conflict between safety and security is not an exception; every enterprise is bound to face it sooner or later. There is no simple answer to this problem, but clearly we have to find solutions, and IT and OT experts must collaborate far more closely than they (reluctantly) do today.

One possible option is limiting access to connected technology, for instance by defining it as a one-way road that enables information to flow out of the industrial network while establishing an “air gap” for incoming changes. Thus, the security risk of external attacks is mitigated.
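
As a rough illustration of that one-way principle, here is a conceptual sketch in Python: telemetry is pushed out of the industrial network over UDP, and no listening socket is ever opened toward the OT side, so there is no software path for inbound commands. The addresses and the read_sensor_values() helper are hypothetical, and real deployments would typically rely on hardware data diodes or unidirectional gateways rather than software alone.

```python
import json
import socket
import time

# Hypothetical address of a collector in the business network.
BUSINESS_COLLECTOR = ("10.20.0.5", 5140)

def read_sensor_values() -> dict:
    """Hypothetical stand-in for polling values from OT equipment
    (e.g. via a local historian or a serial link)."""
    return {"line": "press-01", "temperature_c": 73.4, "timestamp": time.time()}

def run_one_way_forwarder() -> None:
    # UDP send-only socket: telemetry is transmitted outward, but no port
    # facing the industrial network is ever bound, so there is no path
    # for inbound commands or configuration changes.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        payload = json.dumps(read_sensor_values()).encode("utf-8")
        sock.sendto(payload, BUSINESS_COLLECTOR)
        time.sleep(5)

if __name__ == "__main__":
    run_one_way_forwarder()
```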

However, this doesn’t appear to be a long-term solution. There is increasing demand for more connectivity, and we will see OT becoming more and more interconnected with IT. Over time, we will have to find a common approach that serves both security and safety needs or, in other words, both OT security and IT security.

UMA and Life Management Platforms

20.02.2015 by Martin Kuppinger

Back in 2012, KuppingerCole introduced the concept of Life Management Platforms. This concept aligns well with the VRM (Vendor Relationship Management) efforts of ProjectVRM; however, it goes beyond them by not focusing solely on customer-to-vendor relationships. Other terms occasionally used include Personal Clouds (not a very concrete term, with a number of different meanings) and Personal Data Stores (which commonly lack the advanced features we expect to see in Life Management Platforms).

One of the challenges in implementing Life Management Platforms has so far been the lack of standards for controlling access to personal information and of standard frameworks for enforcing concepts such as minimal disclosure. Both aspects are now being addressed.

On one hand, technologies such as Microsoft U-Prove and IBM Idemix are ready for practical use, as recently demonstrated in an EU-funded project. On the other hand, UMA – a standard that allows managing authorization for information stored centrally – is close to final. It moves control into the hands of the “data owner” instead of the service provider.

UMA, especially in combination with U-Prove and/or Idemix, is an enabler for creating Life Management Platforms based on standards and COTS technology. Based on UMA, users can control what happens with their content and decide whether and how to share information with others. U-Prove and Idemix, in turn, allow enforcing minimal disclosure, based on the concepts of what we called “informed pull” and “controlled push”.

Hopefully, we will see a growing number of offerings, and improvements to existing platforms, that make use of the new opportunities UMA and these other technologies provide. As we have written in our research, there is a multitude of promising business models that respect privacy – not only business models that destroy it. Maybe the release of UMA is the catalyst for successful Life Management Platform offerings.

Adaptive Policy-based Access Management (APAM): The Future of Authentication and Authorization

11.02.2015 by Martin Kuppinger

It’s not RBAC vs. ABAC – it’s APAM.

Over the past several years, there have been a lot of discussions around terms such as RBAC (Role Based Access Control), ABAC (Attribute Based Access Control), Dynamic Authorization Management (DAM) and standards such as XACML. Other terms such as RiskBAC (Risk Based Access Control) have been introduced more recently.

Quite frequently, RBAC and ABAC have been debated as opposites, with the question being whether attributes should or must replace roles. However, most RBAC approaches in practice rely on more than roles alone (i.e. on other attributes as well), while roles are a common attribute in ABAC. In practice, it is not RBAC vs. ABAC, but rather a continuum.

However, the main issue in trying to position ABAC as the antipode to RBAC is that the discussion should not be about attributes vs. roles at all. The difference lies in how access is granted.

Some years ago, I introduced the term “Dynamic Authorization Management” for what some vendors called “Entitlement Management” and others “Policy Management”. This was about the contrast between authorization based on statically defined entitlements (as in systems that rely on ACLs, i.e. Access Control Lists, e.g. Windows Server) and authorization decisions made at runtime based on policies and context information such as the user, his roles, etc. – in fact, a number of attributes.

Even longer ago, the term PBAC had been introduced, with the A in PBAC standing for “admission”, because PBAC originated as a standard at the network level.

However, one could argue that systems such as SAP ERP or Windows File Servers also perform authorization dynamically – in Windows, for instance, by comparing ACLs with the SIDs contained in the Kerberos token. Nevertheless, the entitlements themselves are set statically. Admittedly, after various discussions with end users, the term “dynamic” appears not to be clear enough to distinguish the various approaches.

Common, static approaches at best translate policies into static entitlements; that translation step is exactly what is absent in what I will now call Adaptive Policy-based Access Management (APAM). And that is what really makes the difference: policies applied at runtime to make decisions based on “context” in the broadest sense. Whether that context consists of roles, IP addresses, claims, or whatever – this is the essence of the discussion that has been going on for years now.
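
To make this concrete, here is a minimal sketch of such a runtime policy decision in Python. The action name, attributes, and rules are invented for the example; a real deployment would typically use a dedicated policy engine and a standard policy language such as XACML.

```python
from datetime import datetime

# Hypothetical policies: each rule inspects the request context at runtime
# and returns True if it permits the access. No static entitlements are
# provisioned anywhere; the decision is made per request.
POLICIES = {
    "approve_payment": [
        lambda ctx: "finance-manager" in ctx["roles"],
        lambda ctx: ctx["auth_method"] in ("smartcard", "otp"),
        lambda ctx: 8 <= datetime.fromtimestamp(ctx["timestamp"]).hour < 20,
    ],
}

def decide(action: str, context: dict) -> str:
    """Policy decision point: permit only if every rule for the action holds."""
    rules = POLICIES.get(action)
    if rules is None:
        return "DENY"              # deny by default for unknown actions
    return "PERMIT" if all(rule(context) for rule in rules) else "DENY"

# Example request context assembled by the policy enforcement point.
context = {
    "roles": ["finance-manager", "employee"],
    "auth_method": "otp",
    "timestamp": datetime(2015, 2, 25, 10, 30).timestamp(),
}
print(decide("approve_payment", context))  # PERMIT
```

A policy change takes effect on the very next request, without re-provisioning any entitlements.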

It is not a question of whether RBAC or ABAC is right. It is about moving towards APAM. The advantages of APAM are obvious: APAM by default is a security service, i.e. it externalizes security from the applications (theoretically, such a concept could be implemented inside applications, but there is little sense in doing so). APAM automatically reflects policy changes. Policies, if APAM is implemented right, can be expressed in a business-friendly notation. And APAM is adaptive, i.e. it takes the context into account. All the advantages we discussed for Dynamic Authorization Management logically apply to APAM, because this is just a new term for what KuppingerCole previously called Dynamic Authorization Management. Admittedly, it is a better term.

UMA in the Enterprise: There’s far more potential for UMA

02.02.2015 by Martin Kuppinger

UMA, the upcoming User-Managed Access protocol, is a profile of OAuth 2.0. The specification itself defines the role of UMA as follows:

“UMA defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policies. Resource owners configure authorization servers with access policies that serve as asynchronous authorization grants.”

Simply put: UMA allows someone to control access to their data, even when that data resides on someone else’s servers. As the name “user-managed” implies, it is not the owner of the server but the owner of the resource (commonly some form of data) who controls access. As I wrote in a recent post, there now is at least a standard protocol for enabling privacy and minimal disclosure by enhancing user control and consent.

Most of the use cases and case studies published by the standards body focus on Business-to-Consumer (B2C) scenarios. However, there is great potential for Business-to-Business (B2B) and Business-to-Employee (B2E) communication as well. One example, provided by the UMA working group, concerns managing API security based on UMA. But there are numerous other scenarios: all complex information-sharing scenarios involving a number of parties, such as complex financial transactions, fall within that scope.

A while ago, a customer presented an interesting use case to us. The customer organization (organization A) shares data held on a cloud service (service C) with partners (partner 1, partner 2). However, the CSP (Cloud Service Provider) is not in charge of authorizations; every partner is in charge of granting access to “his” resources/data held on that service. A real-world scenario, and a perfect fit for UMA.
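
To illustrate how such a setup could look from the client’s perspective, here is a simplified, UMA-style exchange: the cloud service rejects the first request and returns a permission ticket, the client exchanges that ticket at the partner-controlled authorization server for an access token, and then retries. The endpoints, parameter names, and the placeholder grant type are hypothetical simplifications, not the exact wire protocol defined by the specification.

```python
import requests

# Hypothetical endpoints for the cloud service (resource server) and the
# authorization server operated on behalf of the partner who owns the data.
RESOURCE_URL = "https://service-c.example.com/api/shared-documents/42"
TOKEN_URL = "https://as.partner1.example.com/token"

# 1. The client tries to access the protected resource without a token.
first_try = requests.get(RESOURCE_URL)
assert first_try.status_code == 401
ticket = first_try.json()["ticket"]   # permission ticket issued via the AS

# 2. The client exchanges the ticket at the authorization server, which
#    evaluates the resource owner's (the partner's) access policies.
token_response = requests.post(TOKEN_URL, data={
    "grant_type": "urn:example:uma-ticket",   # placeholder grant type
    "ticket": ticket,
})
access_token = token_response.json()["access_token"]

# 3. The client retries with the issued token; the resource server verifies
#    it and serves the data that the partner agreed to share.
result = requests.get(RESOURCE_URL, headers={
    "Authorization": "Bearer " + access_token,
})
print(result.status_code, result.json())
```

The key point is that the access policies live with the partner’s authorization server, not with the CSP – exactly the separation the use case requires.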

Thus, I strongly recommend looking at UMA not only from a privacy and user-consent perspective, but also from the perspective of fostering better collaboration between businesses. Without any doubt, UMA is another important step forward in standardization, following the introduction of OAuth 2.0 some time ago. Hopefully, UMA will gain the same widespread adoption just as quickly.

Minimal disclosure becoming reality

21.01.2015 by Martin Kuppinger

This week, the EU-funded project ABC4Trust, led by Prof. Dr. Kai Rannenberg of Goethe University Frankfurt, announced that it has successfully implemented two pilot projects. The target of the project is what Kim Cameron defined in his Seven Laws of Identity as law #2, “Minimal disclosure for a constrained use”. It also observes law #1, “User control and consent”.

Using Microsoft’s U-Prove technology and IBM’s Idemix technology, the project enables pseudonymity of users based on what they call ABC: attribute-based credentials. Instead of requesting a broad range of information about users, ABC4Trust focuses on the minimum information required for a specific use case – for example, the fact that someone has passed certain exams instead of his full name and other personal details, or the fact that someone is above 18 years of age instead of his full date of birth.
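
Conceptually, the verifier asks for a predicate over an attribute rather than for the attribute itself. The sketch below illustrates only that idea in plain Python; the actual zero-knowledge cryptography that U-Prove and Idemix provide is replaced by a placeholder, and all names and structures are hypothetical.

```python
from datetime import date

# A conventional service asks for the raw attributes themselves.
conventional_request = ["full_name", "date_of_birth"]

def age_in_years(birth: date, today: date) -> int:
    """Compute a person's age from the date of birth."""
    return today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))

def build_presentation(credential: dict, minimum_age: int, today: date) -> dict:
    """Placeholder for the credential holder's side: in a real ABC system,
    U-Prove or Idemix would produce a cryptographic proof that the predicate
    holds without revealing the underlying date_of_birth at all."""
    satisfied = age_in_years(credential["date_of_birth"], today) >= minimum_age
    return {"claim": "age_in_years >= %d" % minimum_age, "satisfied": satisfied}

credential = {"full_name": "Alice Example", "date_of_birth": date(1990, 5, 1)}
print(build_presentation(credential, 18, date(2015, 2, 21)))
# -> {'claim': 'age_in_years >= 18', 'satisfied': True}  (no birth date disclosed)
```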

This aligns well with the upcoming UMA standard, which is close to finalization; I will publish a post on UMA soon.

So there are working solutions that enable privacy while still confirming the minimum information necessary for a transaction. The biggest question obviously is: will they succeed? I see strong potential for UMA; however, the real-world use cases might differ from the ones the development of UMA has focused on. I am somewhat skeptical regarding ABC4Trust, unless regulations mandate such solutions. Too many companies are trying to build their business on collecting personal data, and ABC4Trust stands in stark contrast to their business models.

Thus, it will take more than academic showcases to verify the real-world potential of these technologies. However, such use cases exist. The concept of Life Management Platforms and more advanced approaches to Personal Data Stores will benefit massively from such technologies – and from standards such as UMA. Both help enable new business models that build on enforcing privacy.

Furthermore, ABC4Trust shows that privacy and pseudonymity can be achieved. This might be an important argument in future privacy regulations – that privacy is not just theoretical, but can be achieved in practice.

How CSPs could and should help their EU customers in adopting the Cloud

16.01.2015 by Martin Kuppinger

Many customers, especially in the EU (European Union) and particularly in Germany and some other countries, are reluctant to adopt the cloud. There are other regions with comparable situations, such as the Middle East or some countries in the APAC region. Public cloud solutions provided by US companies in particular are viewed skeptically.

While the legal aspect is not simple, as my colleague Karsten Kinast has recently pointed out, it can be solved. Microsoft, for instance, has contracts that take the specifics of EU data protection regulations into account and provide solutions, and it provides information on this publicly on its website, such as here. This at least minimizes the grey area, even though some challenges, such as pending US court decisions, remain.

There are other challenges such as the traceability of where workloads and data are placed. Again, there are potential solutions for that, as my colleague Mike Small recently explained in his blog.

This raises a question: Why do CSPs struggle with the reluctance of many EU (and other) customers to adopt cloud services, instead of addressing the major challenges?

What the CSPs must do:

  • Find a deployment model that is in conformance with EU (and other) privacy and data protection laws – which is feasible.
  • Adapt the contracts to the specific regional laws and regulations – again, this can be done, as the Microsoft example proves.
  • Evaluate additional solutions such as traceability of workloads and data, as Mike Small has described in his blog post.
  • Define cloud contracts that take customer needs into account, particularly avoiding disruptiveness to the customer’s business. I have blogged about this recently.
  • Educate customers openly, regarding both the legal and the technical aspects. The better a job CSPs do of providing sound contracts and implementations, the faster the reluctance will diminish.

There is some technical work to do, and more work to do on the legal side. And yes, that will cost a CSP money; their lawyers might even say they are giving up some advantages. However, if your advantage is based on potential disruptiveness to the customer’s business or on slow adoption of your cloud services by customers, then the disadvantages might far outweigh the advantages.

Thus, the recommendation to CSPs is simple: Make this a business decision, not a lawyer decision. Unilateral, not to say unfair, agreements are a business inhibitor. That is a lesson some of the company lawyers of US CSPs still need to learn.

Seven Fundamentals for Future Identity and Access Management

11.12.2014 by Martin Kuppinger

Identity and Access Management is changing rapidly. While the traditional focus has been on employees and their access to internal systems, with an emphasis on the HR system as the leading source for identity information, Identity Management has to address a far more complex environment today.

Over the past several years, we have already seen a number of drastic changes triggered by Cloud Computing, Mobile Computing, and Social Computing. Different deployment models, the management of access to cloud applications, authentication and secure information access for mobile users, and the ever-tighter integration of business partners and customers have, for some time, had a massive impact on the way Identity and Access Management is done.

But these changes are just the tip of the iceberg. Users accessing services through apps, access management for operational IT, and the Internet of Things (or, better, the IoEE as the Internet of Everything and Everyone) with billions of things that all have identities (and belong to someone or something) are three mega-trends that will further change the role of Identity and Access Management.

Traditional concepts for Identity and Access Management that have been focused on the internal IT are no longer sufficient. We still need some of these, but they cover only a fraction of the future scope – and for some organizations already today’s scope – of Identity and Access Management.

Instead of traditional concepts for Identity and Access Management, organizations should define a new view of that topic. The following seven Fundamentals for future Identity and Access Management might help organizations shape their own strategy and roadmap for Identity and Access Management.

Fundamental #1: More than humans – It’s also about Identities of things, devices, services, and apps

Everything has an identity. Whether it is something like a smart meter, one of the various connected elements in connected vehicles, or a device within the realm of wearable computing, everything has an identity. They might require access that has to be managed. They will be accessed from devices through apps, all requiring an identity. Identity and Access Management is no longer about the human accessing a particular system, but about humans, things, devices (which we might consider just being things), services and apps (which again might be considered just a specific type of service) accessing and interfacing with other humans, things, devices, services and apps. That drastically changes the number of identities we have to deal with. It changes authentication. It requires management of relationships between identities. It massively expands the scope of Identity and Access Management.

Fundamental #2: Multiple Identity Providers – We will not manage all identities internally anymore and trust will vary

There is no central directory anymore, neither for humans nor for all the other things and services. We cannot manage millions of customers the same way we manage thousands of employees. Furthermore, many people do not want to re-register again and again with other companies. They want to re-use identities. BYOI (Bring Your Own Identity) is an increasingly established concept. In the future, there will be even more Identity Providers. Trust will vary, and we will need to understand risk and context (see Fundamental #7).

Fundamental #3: Multiple Attribute Providers – There will no longer be a single source of truth and information on identities

There will not only be different Identity Providers, there will also be different Attribute Providers. This is not really new: the HR system never was the only source of truth and information about identities. Many attributes never showed up there, and a number of changes have always been triggered by other systems or manually – just think about the process of immediately blocking all access of an employee who has been terminated. This happens first in the Identity and Access Management system, while the lay-off is reflected in the HR system only later. But even the “Corporate Directory” that some organizations consider the single source of truth will not withstand the evolution towards an Identity and Access Management that supports not only Cloud, Mobile, and Social Computing, but also OT (Operational Technology) security, APIs (Application Programming Interfaces, through which apps, services, and systems interact with each other and which need to be protected), the apps themselves, and the Internet of Things. There will be many sources of truth for various attributes.

Fundamental #4: Multiple Identities – Many users will use different identities (or personas) and flexibly switch between these

There is no 1:1 relationship between persons and their digital identities. A person might have different identities. At a higher abstraction level, a person might be an employee, a freelance contractor, and a customer of the same corporation, all at the same time. One person, multiple identities. On a more concrete level, a person might switch from their Facebook account to Google+ to self-registration to a type of account we do not even know yet (trends change rapidly on the Internet) – but it is still the same customer. Organizations have to understand that it is still the same person; otherwise they will lose the former relationship.

Fundamental #5: Multiple Authenticators – There is no single authenticator that works for all

Simply stated, username and password do not work for wearable computing. More generally, there are so many different types of identities and related elements in future Identity and Access Management that it becomes obvious there is no common denominator for authentication anymore. Username and password have served (though not well…) for this purpose for decades, and many companies tried to standardize on a specific strong authentication technology to overcome their limitations. Now we have to accept that there is no single approach we can rely on. We will have to support different authentication mechanisms, while understanding the risk and making risk-aware access decisions – see Fundamental #7.

Fundamental #6: Identity Relationships – We must map humans to things, devices, and apps

Things belong to humans or organizations. They might be part of bigger things – just think about the connected vehicle. Humans use devices with apps to access services, and the apps act on their behalf. This means there are complex relationships between identities. Future Identity and Access Management must understand and manage these relationships in order to make the right decisions.
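
As a minimal sketch of what such a relationship model could look like: a simple directed graph over hypothetical entities and relationship types that lets an access decision trace a requesting app or device back to the human or organization ultimately accountable for it.

```python
# Hypothetical identity relationship graph: edges are (subject, relation, object).
RELATIONSHIPS = [
    ("alice",       "owns",           "alice-phone"),
    ("alice-phone", "runs",           "fitness-app"),
    ("fitness-app", "actsOnBehalfOf", "alice"),
    ("acme-corp",   "owns",           "smart-meter-17"),
]

def related(subject, relation):
    """All objects linked to the subject via the given relation."""
    return [o for s, r, o in RELATIONSHIPS if s == subject and r == relation]

def accountable_party(identity):
    """Trace an app or device back to the human or organization it acts for."""
    delegators = related(identity, "actsOnBehalfOf")
    if delegators:
        return delegators[0]
    owners = [s for s, r, o in RELATIONSHIPS if r == "owns" and o == identity]
    return owners[0] if owners else None

print(accountable_party("fitness-app"))     # alice
print(accountable_party("smart-meter-17"))  # acme-corp
```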

Fundamental #7: Context – Identity and Access Risk varies in context

A key concept of future Identity and Access Management is context. Which device is someone using? Which type of authentication? Where is the device being used? Many elements make up the context, and depending on that context, risk varies. Identity and Access Management has to become risk-based and, with the ever-changing context, dynamic. While today’s static access controls implicitly reflect a risk understanding in a static context, future access controls and decisions must become dynamic and adapt to the current context.
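
As an illustration of how context could feed a risk-aware decision, the sketch below scores a few hypothetical context signals and maps the total to an action; the signals, weights, and thresholds are invented for the example.

```python
# Hypothetical weights for context signals that increase access risk.
RISK_WEIGHTS = {
    "unmanaged_device": 30,
    "unknown_network": 20,
    "password_only_auth": 25,
    "unusual_country": 35,
}

def risk_score(context: dict) -> int:
    """Sum the weights of all risk signals present in the request context."""
    return sum(weight for signal, weight in RISK_WEIGHTS.items() if context.get(signal))

def access_decision(context: dict) -> str:
    """Map the risk score to an action: allow, require step-up auth, or deny."""
    score = risk_score(context)
    if score < 30:
        return "allow"
    if score < 70:
        return "step-up-authentication"
    return "deny"

office_laptop = {"unmanaged_device": False, "unknown_network": False,
                 "password_only_auth": False, "unusual_country": False}
foreign_tablet = {"unmanaged_device": True, "unknown_network": True,
                  "password_only_auth": True, "unusual_country": False}

print(access_decision(office_laptop))   # allow
print(access_decision(foreign_tablet))  # deny (score 75)
```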

These Fundamentals help define the scope, strategy, and roadmap for future Identity and Access Management.

Read your cloud contract well: Your cloud service might become disruptive to your business

08.12.2014 by Martin Kuppinger

There is a lot of talk about disruptive technology and disruptive innovation – not only in the context of fundamental technology changes, but also in the context of innovating your business by being disruptive.

Cloud Computing has a potential for fostering such innovation in business, for various reasons:

  • It makes IT services available to organizations that never before could afford these services. This is particularly relevant to SMBs.
  • It provides rapid adoption of new services, thus enabling rapid innovation.
  • It allows companies to concentrate on their core business and competitive advantage, instead of baseline IT.
  • It allows companies to forget about discussions and historic decisions about IT vendors and platforms and instead concentrate on the service delivered, (more or less) regardless of the underlying platform.

While none of this by itself constitutes disruptive innovation, it helps businesses become more agile and thus fosters such innovation. The flexibility Cloud Computing promises (and, in many situations, delivers) helps the business move away from IT as the naysayer and showstopper.

However, there is another sense of “disruptive” that Cloud Computing can bring to a business: it might become disruptive to the business itself. If you have ever read a standard contract of a Cloud Service Provider (CSP) thoroughly (and cloud business is about standard contracts), you have probably seen a number of points in there which might become challenging to your business.

Look at the parts of the contract that cover topics such as end of service, changes to the service, or availability. According to their contracts, many CSPs could go out of business at virtually any point in time. They can change their services, typically with short prior notification (if they notify at all). And their guarantees regarding availability might not meet your requirements and expectations.

Furthermore, you will rarely (not to say never) find sections that guarantee upwards compatibility of the APIs (Application Programming Interfaces) provided by the cloud service.

Is this all bad? Not necessarily. To some degree, there are good reasons for these contracts (aside from the potential liability issues). A benefit of Cloud Computing is the flexibility of changing the service rapidly for improved capabilities, but also for improved security. Clearly, the common three-month patch window we observe in many organizations (and, in others, far longer or entirely undefined windows) is no longer sufficient in these days of zero-day attacks. In addition, the availability of cloud services is commonly far better than that of internal IT services, at a fraction of the cost of implementing high availability.

On the other hand, feature changes might become massively disruptive. They might lead to a huge increase in help-desk calls when users are confronted with a new user interface or find features in new places. Such changes might even prevent applications from working at all, or remove features some customers relied on. The CSP might argue that virtually no one used a particular feature – but if you are among the 1% who did, it does not help you at all to know that 99% never used it.

When APIs are changed, this can affect integration between cloud services or between a cloud service and your existing on-premise applications. As with any change, it might also affect your customizations. The typical argument is that the advantage of cloud services is that they provide a well-thought-out standard set of features in areas where you will most likely not gain a competitive edge through customization. I have heard this argument in various forms several times. Yes, ideally an organization relies on a standard service. However, in most services you must customize – just think about your own sub-sites and libraries in Microsoft SharePoint on Office 365. Moreover, most business applications, such as CRM in the cloud, ERP in the cloud, or service desk in the cloud, do not exist in isolation from the rest of the business. There is a need for integration.

So what can you do?

On one hand, CSPs should understand these issues. At the very least, APIs must become upwards compatible. That requires more upfront thinking about which APIs to expose, and it requires better software design. But it is feasible, perhaps with the occasional issue when a major upgrade is done. The same holds true for customizations: these must continue to work.
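
One common way to keep an API upwards compatible is to version it and keep serving the old response shape even after the internal data model changes. The sketch below illustrates the idea with hypothetical field names: a richer v2 customer record is adapted back into the v1 shape so that existing clients keep working.

```python
# Hypothetical internal v2 representation of a customer record.
def load_customer_v2(customer_id: str) -> dict:
    return {
        "id": customer_id,
        "name": {"given": "Alice", "family": "Example"},  # split into parts in v2
        "segment": "enterprise",                           # new field in v2
    }

def to_v1(customer: dict) -> dict:
    """Adapter that preserves the old contract: v1 clients still receive the
    single 'display_name' field they were built against."""
    return {
        "id": customer["id"],
        "display_name": customer["name"]["given"] + " " + customer["name"]["family"],
    }

def get_customer(customer_id: str, api_version: int) -> dict:
    """Single entry point that serves both API versions from the same data."""
    customer = load_customer_v2(customer_id)
    return customer if api_version >= 2 else to_v1(customer)

print(get_customer("c-42", api_version=1))  # old clients keep working unchanged
print(get_customer("c-42", api_version=2))  # new clients get the richer model
```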

On the other hand, if APIs change, customizations might break, or features are discontinued, there must be notification well ahead of time, so that customers can prepare for the change.

For customers, the reality of standard cloud contracts means that they must prepare for such unwanted changes. There must be an exit strategy in case a cloud service is discontinued or a CSP goes out of business. Customers must think about what to do in case of availability issues. And they must do their customization and integration work keeping in mind that things might change. They must be aware that relying on a cloud service, particularly SaaS (Software as a Service), might become disruptive to their business.

This is not to say that relying on the cloud is bad. If customers make a fair comparison of cloud services with their on-premise services, they will find many areas where cloud services score far better. However, not everything about cloud services – and particularly not everything in the very unilateral (in the sense of “unfair”) standard contracts – is good. If this is well understood, customers can benefit from Cloud Computing without disrupting their business.
