31.03.2010 by Martin Kuppinger
In my recent post on versatile authentication I touched on the topic of national eID cards. Some two weeks ago, I gave a presentation on eID interoperability from a private-sector perspective. I started with the question of why strong authentication technologies are still not widely used. The vendors might claim that they are, but in fact we still mainly rely on weak approaches like username/password, PINs, PIN/TAN, and so on.
One reason is that reusable approaches need a sponsor. Many companies in eBanking, eCommerce, and other areas understand the need for strong authentication. But they don’t want to rely on proprietary mechanisms. They don’t want to deploy and provide the logistics for advanced mechanisms, due to the associated costs. And they don’t want to invest in a technology for their customers which might then be used by their competitors as well. Readers for cash cards are one example of the latter situation, amongst others.
For sure you could argue that the example of the UPU (Universal Postal Union) demonstrated some 135 years ago that this isn’t a valid argument. Before the UPU, there was a complex system of billing between postal agencies in different countries. They counted the letters and the fees and billed each other. The basic idea behind the UPU was that there is usually one letter back per letter sent, thus the fees to be paid are more or less equal. It is therefore much cheaper to simply drop that billing and have senders pay a fee only in the letter’s country of origin. This system has been working for a pretty long time now. And I don’t have many doubts that a standardized system requiring some hardware to be deployed would work as well, as long as everyone supports their own customers – the ones with fewer customers will pay less on average because they have to deploy less, the ones with more customers will pay more.
Unfortunately, I see neither a standard solution which is accepted by everyone nor the willingness to create one. Thus we need alternatives. And that is where eID cards come into play. There is a potential for mass adoption, at least in countries where it is mandatory to have such a card. However, that requires that these cards can really be used for strong authentication in eCommerce and other areas. And that, again, requires the deployment of readers for these cards.
Thus, we need someone to sponsor at least the initial deployment to build the critical mass. The only ones able to do that are the governments, like in Germany, where 1.3 million readers will be sponsored. That in fact is business development, because it enables the use of Internet-based services with strong authentication. It enables new business models and efficiency in organizations, and it will reduce fraud and the associated costs. However, the eID projects usually aren’t seen from that business-development perspective – private use cases are more of an add-on. Decisions like the one in the Netherlands to postpone such projects show a lack of understanding of the potential economic impact.
We need mass adoption of reusable strong authentication for the “Internet business”. The only way to achieve this is by sponsors who invest in the mass adoption of technologies. And the most likely sponsors are governments, as part of what they do for their economies and their competitive advantage. Once we have a mass adoption of strong authentication, we might see additional technologies being used for graded and step-up authentication. Vendors of versatile authentication and context-based authentication/authorization will benefit from this as well because eID cards will always be only one of many accepted means of authentication. But the ones who benefit most are the businesses themselves which can reduce fraud and implement new business models.
Visit EIC 2010, Cloud 2010, MIS 2010.
25.03.2010 by Martin Kuppinger
There is a lot of talk about making our planet smarter. Despite being far too much fiction, the film “Die Hard 4.0” touched on some of the potential risks around this. I recently had a very interesting discussion with a forensics/incident expert from the US. We discussed several issues and ended up at the idea of this “smarter planet” and the “smart grid” as one of its most prominent elements. Per se, the idea of having a networked infrastructure in many areas, with a high degree of flexibility and increased service availability, is as appealing as it is inevitable – things will go down that path.
However, the security of that future seems to be somewhat ignored, at least in the public discussion. For sure, politicians aren’t interested in the dark side of things as long as the bright side is being discussed. They don’t want to be the party poopers. Only if there is an incident will they claim that they have done everything to avoid it and that everyone else is guilty but not them. Vendors, on the other hand, are mainly interested in driving things forward. Most of them for sure don’t ignore security – but it seems to be more of a pain than an opportunity.
Thus, we currently observe the same thing writ large that we can see day by day in the small: security is ignored when driving things forward. That is true for a tremendous part of the software which is developed, it is true for new standards in IT (think about web services – security was missing at the beginning), and it is true for so many other areas. And now the same thing seems to be happening with all these smart things. But, from my perspective, that means these things aren’t really smart.
Just think about the smart grids. A smart grid is in effect a massive data retention mechanism, collecting data from and networking millions of households with the utilities. There are privacy threats – who used which electric device when? There are new attack surfaces. For sure, some things are going on around security. But from what I observe, security is developing more slowly than the rest of the smart planet initiatives. It’s sort of a ticking time bomb out there.
What will happen? Security is undervalued. For sure it isn’t ignored, but it won’t have the relevance it should have in these projects. People will cheer when projects deliver some results. Security will become a problem. There will be unpleasant discussions about who is guilty or not. Security issues will be patched – to some degree. Wouldn’t it be a better idea to build security into the concepts from scratch? To really have a smarter planet at some point in time?
Sorry for being the party pooper!
17.03.2010 by Martin Kuppinger
There are so many myths out there about Cloud Security – time to start clearing them away…
- The cloud is inherently insecure. No, not really. There are providers which deliver a high level of security. The cloud can be more secure than internal IT, given that services are frequently operated very professionally.
- The cloud is more secure than internal IT. No, not that either. The cloud is neither secure nor insecure. It is about the individual service, which might be more or less secure. And it always depends on what you compare it with, e.g. how strong security in the existing internal environment really is. Thus, it is important to define security requirements in service descriptions and SLAs and to measure security.
- Cloud Security issues are new. No, most of them are not. They are the same as in outsourcing or the tactical use of external services, which we have been doing for years now. The difference is that there are many more services to deal with – which is an opportunity to handle security in a standardized way and improve it beyond the typical ad-hoc approaches of the past.
- Security is the task of the Cloud Service Provider. Yes and no. Service providers have to provide a high level of security and they have to inform about it. But you can’t just rely on them. You are always the one who defines your security requirements and is responsible for their fulfillment – by choosing appropriate service providers.
- We can’t do things outside of the EU. A myth. There are some legal aspects around operations on privacy-related data which have to be observed. But overall, it’s not that things can’t be done – it’s more that there is a big grey area of uncertainty.
- SAML solves the IAM issues in the cloud. No, definitely not true. SAML is the first little step towards the target of externalized security for cloud services. But that is only about the separation of administration and authentication. The much more interesting topic of authorization (XACML and other standards) has to be solved as well. And few cloud service providers support XACML today. A few support their own proprietary web services as an alternative. Not to speak of auditing interfaces…
- Security in the cloud can’t be measured. Somewhat true – in the sense that most providers don’t support risk metrics, detailed auditing, and so on. But theoretically not true, because these interfaces can (and should) be provided.
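To illustrate the authorization point above: the idea behind XACML-style externalized authorization is that an application hands the subject, resource, and action attributes to a central policy decision point instead of hard-coding the rules itself. The following is a deliberately minimal sketch of that pattern – the attribute names and policies are purely illustrative assumptions, not a real XACML implementation or any vendor’s API.

```python
# Minimal sketch of externalized authorization in the spirit of XACML:
# the application asks a policy decision point (PDP) instead of
# embedding access rules in its own code.
from dataclasses import dataclass

@dataclass
class Request:
    subject_role: str  # who is asking (illustrative attribute)
    resource: str      # what they want to access
    action: str        # what they want to do

def evaluate(request: Request, policies) -> str:
    """Return 'Permit' if any policy matches, else 'Deny'.
    Real XACML combining algorithms (deny-overrides etc.) are omitted."""
    for policy in policies:
        if (policy["role"] == request.subject_role
                and policy["resource"] == request.resource
                and request.action in policy["actions"]):
            return "Permit"
    return "Deny"

# Policy set maintained centrally, outside the cloud application.
policies = [
    {"role": "accounting", "resource": "invoices", "actions": {"read", "approve"}},
    {"role": "sales", "resource": "invoices", "actions": {"read"}},
]

print(evaluate(Request("sales", "invoices", "approve"), policies))       # Deny
print(evaluate(Request("accounting", "invoices", "approve"), policies))  # Permit
```

The point of the pattern is that changing who may approve invoices means changing the central policy set, not redeploying every application – which is exactly the externalization step that SAML alone does not give you.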
More on Cloud Security and some of the myths and real issues in the KuppingerCole Virtual Conference on Cloud Security. Register for free!
And for sure at Cloud 2010, parallel to EIC 2010.
11.03.2010 by Martin Kuppinger
Versatile authentication is one of the hot topics in IT – more and more vendors are starting to support it in some way or another. Versatile, a not-that-common term, means the ability to flexibly switch between different authentication methods. In practice, versatile authentication solutions should support at least the following features:
- Flexible use of different authentication methods.
- Simple plug-in of additional authentication methods, i.e. extensibility.
- Flexible interfaces for applications, or integration with existing technologies which interface with other apps.
- Support for step-up authentication and other more advanced approaches.
Other aspects like fallback methods, management support for handling the token logistics and so on are value-adds, depending on the implementation of the versatile authentication technology.
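The features above can be sketched in code. The following is a hypothetical illustration, assuming a plug-in registry of methods, each with an assurance level, and a step-up check an application can call when it needs stronger authentication; all class names and levels are my own assumptions, not any vendor’s product.

```python
# Hypothetical sketch of a versatile authentication layer:
# pluggable methods, each with an assurance level, plus step-up support.
class AuthMethod:
    def __init__(self, name: str, level: int):
        self.name = name
        self.level = level  # higher = stronger assurance

    def verify(self, credential) -> bool:
        raise NotImplementedError

class PasswordAuth(AuthMethod):
    def __init__(self, passwords: dict):
        super().__init__("password", level=1)
        self.passwords = passwords

    def verify(self, credential):
        user, pw = credential
        return self.passwords.get(user) == pw

class OtpAuth(AuthMethod):
    def __init__(self, otps: dict):
        super().__init__("otp", level=2)
        self.otps = otps

    def verify(self, credential):
        user, otp = credential
        return self.otps.get(user) == otp

class VersatileAuthenticator:
    def __init__(self):
        self.methods = {}  # plug-in registry: new methods register here

    def register(self, method: AuthMethod):
        self.methods[method.name] = method

    def authenticate(self, method_name: str, credential) -> int:
        """Return the achieved assurance level, or 0 on failure."""
        method = self.methods[method_name]
        return method.level if method.verify(credential) else 0

    def requires_step_up(self, achieved_level: int, required_level: int) -> bool:
        return achieved_level < required_level

# Usage: a password login achieves level 1; a transaction requiring
# level 2 triggers step-up, e.g. via the OTP method.
auth = VersatileAuthenticator()
auth.register(PasswordAuth({"alice": "secret"}))
auth.register(OtpAuth({"alice": "123456"}))
level = auth.authenticate("password", ("alice", "secret"))
print(auth.requires_step_up(level, required_level=2))  # True
```

The design choice worth noting is that applications only talk to the `VersatileAuthenticator` facade, so adding a new method (smartcard, biometrics) is a registration, not an application change – which is the extensibility requirement from the list above.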
03.03.2010 by Martin Kuppinger
In these days the industry talks a lot about IT GRC, Risk Management, Access Governance, Identity for the Cloud, and so on. However, we should keep in mind that the vast majority of organizations still have to do a lot of homework around basic Identity and Access Management. And, even more: that is the foundation for many of the other things like Access Governance, because it’s not only about auditing but also about managing (and, honestly, it’s much more about managing and enforcing preventive controls than about auditing in a reactive way, isn’t it?).
Thus, you shouldn’t ignore Identity Provisioning, Virtual Directory Services (still one of the most valuable technologies in IAM and one of the best hidden secrets at the same time), or Enterprise SSO. You will find a lot of podcasts and webinar recordings at our website. Thus, I won’t analyze everything around that but focus on a few points why we should still consider the core IAM market as relevant:
- Provisioning tools have matured over the past years – and they frequently support many of the “new” features like access certification. Thus you can do a lot relying only on these “basic” tools instead of adding too much on top of them. Not all, but a lot. That has to be carefully analyzed, but in several cases one tool definitely is the better solution than multiple tools. That’s like in real life: there are advantages to the multi-tool, and there are advantages to the specialized tools.
- If you look at the market, there are relatively few really big organizations. Most of them have some IAM. But, more correctly, most of them have more than one IAM approach and implementation. Thus, they have integration issues – an important market, with many architectural options to solve this. And, beyond that, in these large organizations you can frequently observe a tendency to implement point solutions in some areas – for example, an additional provisioning tool for some specific systems. Given that, there is still a lot of work to do and a lot of potential, for example in providing the provisioning tool which integrates other provisioning tools.
- Medium-sized businesses frequently don’t have much provisioning or other IAM solutions in place. Thus, there is a huge market opportunity, for on-premises as well as cloud-based solutions.
- Some implementations might be worth a review with respect to today’s requirements and solutions. There is always room for updates and even replacements.
The reason why there is somewhat less attention from the marketing departments of vendors on that segment (at least when looking at some vendors which have more than just provisioning) is simple: provisioning is hard to sell. E-SSO is easier to sell. Access Governance might be even easier than that. Thus, going for the low-hanging fruit instead of focusing on products with a long sales cycle and a lot of competition appears logical from a sales perspective. However, that leaves a large portion of the market untouched, and it doesn’t fill the pipeline sufficiently for a time when the low-hanging fruit has been picked.
It’s not up to me to judge vendor marketing and sales strategies. But it is interesting to observe what is happening in the market. And that might be one reason for the relative success of several of the smaller vendors in many markets (by the way: some large vendors are very active in the “classical” segments – innovative, focused,…).
From a customer perspective, the buzz and fuss around the new topics might divert the focus from the things which have to be done as a foundation, on which other things can be built. Thus, customers should always keep in mind that they can’t be successful without doing their homework. And that includes providing a solid foundation for provisioning – with an architecture adequate to the customer’s requirements. I’ll blog about these architectures soon, but you might as well look here - I’ve touched on the topic in this webinar.
Don’t miss the European Identity Conference 2010 and its Best Practice presentations to learn more about this. See you in Munich, May 4th to 7th.
03.03.2010 by Martin Kuppinger
Yesterday, the German Federal Constitutional Court declared the German law on “Vorratsdatenspeicherung” (data retention) unconstitutional. That wasn’t a real surprise, given that this is well aligned with other decisions of the Federal Constitutional Court. Two interesting annotations: there were some 35,000 plaintiffs against this law. And the German Minister of Justice, Sabine Leutheusser-Schnarrenberger, was amongst them. She started the lawsuit while in opposition – which now led to the interesting situation of a lawsuit by her against Germany, represented by her – so she would have been a winner in that case either way.
The law on “Vorratsdatenspeicherung” (a nice term, isn’t it – nearly as long as the name of the Minister of Justice) is about the collection of data at ISPs and other types of service providers – connection logs of internet and telephony services. They had to be kept for six months to allow investigations. The law was formulated based on an EU directive, but exceeded the minimum requirements of that directive. The fact that this law has been declared unconstitutional might also affect the EU directive, because it is criticized not only in Germany but in other countries as well; and it probably will affect other instances of massive and undifferentiated data collection by the German state.
The Federal Constitutional Court doesn’t forbid the collection of information. However, the current law didn’t fulfill the requirements of data security, didn’t comply with some other laws (like the protection of clergy, doctors,… and their confidentiality requirements), and didn’t restrict the use of the information sufficiently. Interestingly, the Federal Constitutional Court also decided that the information has to be deleted immediately (or at least as fast as possible); thus the decision goes beyond other decisions which allowed the government to first improve the law without changing the status quo.
After the decision of the Federal Constitutional Court had been announced, the discussion about the next steps started immediately – and that’s where IPv6 comes into play. In its decision, the Federal Constitutional Court declared that connection data of churches, some governmental organizations, and other specified parties must not be stored. That led to the argument of the lobbyists of the “internet economy” (e.g. ISPs and so on) that this can’t be implemented. Given that IP addresses are usually assigned dynamically, it wouldn’t be feasible to exclude some groups. But, honestly, that isn’t true. It is true as long as you rely on IPv4 and dynamic IP addresses (and given that IPv4 addresses are limited, we have to). But it isn’t true with IPv6. In other words: when relying on IPv6, you can comply with the decision of the German high court. Given that the technology supporting IPv6 is available in most areas – client operating systems, servers,… – the answer is simple: finally switch to IPv6 as the standard protocol and you’re done. Overall, we’ve been waiting way too long for IPv6 to become the primary protocol, with IPv4 used only for backwards compatibility. This decision, with its impact on the entire European legislation in that field, might thus become a push towards IPv6.
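To make the point concrete: with stable, per-organization IPv6 prefixes, a provider could check whether a connection belongs to an excluded group before logging it at all. The sketch below uses Python’s standard `ipaddress` module; the prefixes are documentation addresses (2001:db8::/32, reserved for examples) standing in for hypothetical church and public-body networks.

```python
# Sketch: filter out connection data of excluded parties before retention,
# feasible when organizations have stable IPv6 prefixes.
import ipaddress

# Hypothetical excluded networks (documentation prefixes, purely illustrative).
excluded_prefixes = [
    ipaddress.ip_network("2001:db8:aaaa::/48"),  # e.g. a church network
    ipaddress.ip_network("2001:db8:bbbb::/48"),  # e.g. a protected public body
]

def may_log(address: str) -> bool:
    """Return False if the address falls into an excluded prefix."""
    addr = ipaddress.ip_address(address)
    return not any(addr in net for net in excluded_prefixes)

print(may_log("2001:db8:aaaa::1"))  # False – excluded, must not be stored
print(may_log("2001:db8:cccc::1"))  # True – may be retained under the law
```

With dynamically assigned IPv4 addresses this check is impossible, because the same address maps to different subscribers over time – which is exactly why the move to stable IPv6 addressing changes the feasibility argument.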