01.05.2013 by Martin Kuppinger
Last week, Google announced that it has joined the FIDO Alliance. FIDO stands for Fast Identity Online. The alliance was formed in July 2012. Its mission is to change the nature of online authentication by providing interoperability among strong authentication devices. The alliance is working on specifications for an open, scalable, interoperable set of mechanisms that allow secure authentication of users to online services without the need for passwords for each of these services. It wants FIDO to become a standard that allows both existing and future strong authentication devices (those that support the FIDO standard) to be used in an interoperable way.
This is in fact about “versatile authentication” from scratch, enabled in any device. Currently, many organizations are investing in versatile authentication technology that allows them to flexibly change and combine different authentication mechanisms. With FIDO, that could become a standard.
Users can choose among different mechanisms for strong authentication, including hardware tokens, embedded hardware such as TPMs (Trusted Platform Modules), biometrics, etc. The website will recognize these devices as “FIDO devices” and enable them. Once a strong authentication device is connected to a site, it can be used the same way it has always been used.
FIDO requires a browser plugin, which is the simple part of the story. It also requires a device-specific module that must be installed to use the “FIDO authenticator”, i.e. the strong authentication device of choice. The website or online service must also support FIDO.
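To make the flow more concrete, here is a minimal sketch of the registration/challenge-response pattern that FIDO-style authentication builds on. This is an illustration, not the FIDO protocol: the class names, the service “example.com” and the user names are made up, and an HMAC over the challenge stands in for the digital signature – real FIDO authenticators generate a per-service asymmetric key pair and the service stores only the public key.

```python
import hmac, hashlib, os

class Authenticator:
    """Stand-in for a FIDO authenticator: holds one secret per registered service."""
    def __init__(self):
        self._keys = {}          # service name -> per-service key material

    def register(self, service):
        # Real FIDO authenticators generate an asymmetric key pair per service
        # and hand out only the public key; a shared secret stands in here.
        key = os.urandom(32)
        self._keys[service] = key
        return key

    def sign(self, service, challenge):
        return hmac.new(self._keys[service], challenge, hashlib.sha256).digest()

class Service:
    def __init__(self, name):
        self.name = name
        self._enrolled = {}      # user -> key material captured at registration

    def register(self, user, authenticator):
        self._enrolled[user] = authenticator.register(self.name)

    def authenticate(self, user, authenticator):
        challenge = os.urandom(16)            # fresh challenge per login attempt
        response = authenticator.sign(self.name, challenge)
        expected = hmac.new(self._enrolled[user], challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

device, other = Authenticator(), Authenticator()
site = Service("example.com")
site.register("alice", device)
site.register("bob", other)
print(site.authenticate("alice", device))   # True: the enrolled device answers correctly
print(site.authenticate("alice", other))    # False: a different device holds the wrong key
```

The point of the pattern survives the simplification: no password is sent to or stored at the service, each service sees different key material, and a fresh challenge per login prevents replay.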
Success of FIDO will depend on two factors. First, there must be a critical mass of online services supporting FIDO. Given that several large service providers are already members of the FIDO Alliance, that might happen. Secondly, there is the need for a critical mass of users using strong authentication devices with FIDO support. The challenge in that area will be simple enablement of FIDO through browser plugins (even better if they are pre-installed) and especially the availability and simple deployment of device-specific modules.
On the other hand there clearly is the question of whether FIDO will gain sufficient support and acceptance amongst the vendors. What will the vendors of strong authentication devices do? What will the vendors of versatile authentication platforms do? And what will the providers of online authentication services do?
From my perspective FIDO could help all of them. It provides the opportunity for “strong authentication for the masses”, for a ubiquitous approach that works for everyone, with flexible choice of strong authentication devices. The providers of Versatile Authentication Platforms can still provide the server-side interfaces, but with more flexibility in supporting different devices. And providers of online authentication services can still act as brokers and service providers – for many online services that will remain the better choice than direct support for FIDO. There might even be services that are brokers for “non-FIDO clients” and act as FIDO clients.
Overall, there is good potential for the FIDO Alliance, despite the fact that it requires the installation of a client component. I greatly appreciate everything that makes the Internet more secure, and I will closely watch the progress of the FIDO Alliance. However, I have seen so many concepts fail in this area that I would not yet bet on its success.
28.01.2013 by Martin Kuppinger
Recently a story about Google hit the news: according to an article in Wired, “Google declares war on the password”. Google wants to integrate strong authentication into the browser. The approach is based on the idea of using a USB key or an NFC (Near Field Communication) device to log into applications. Currently, Google uses a YubiKey, developed by Yubico.
This brought my attention back to Yubico. Some months ago, I had a conversation with their CEO Stina Ehrensvärd. She unveiled some of the new devices Yubico is working on, including their YubiKey NEO, which supports both NFC and USB, and their YubiKey Nano, which is so small that it is designed to be put into a USB port and to remain there. There are other YubiKeys out there as well, but these two are the most interesting ones.
In contrast to other vendors, Yubico focuses on a “lightweight” approach with fairly cheap devices and little overhead. They also deliver free and open source software for the backend side, but mainly rely on partners. Customers can simply buy a YubiKey online, download the free software or turn to an enterprise software partner supporting YubiKey, including Quest Software, Duo Security, and Digital Persona. A growing number of consumers are also using YubiKeys with password managers, including Password Safe, Passpack and LastPass. Adding Google to the list of partners would obviously be a very big deal for Yubico.
A more interesting question however is a simple one: is this approach good enough to really replace passwords? When we look at the authentication space, there are three factors: Knowledge, Ownership, and Biometrics. Quite some time ago, I wrote a report providing a market overview on conceptual approaches for strong authentication together with my two colleagues Prof. Dr. Sachar Paulus and Sebastian Rohr. This provides an in-depth analysis of strengths and weaknesses of different approaches.
First, having only a token will not be sufficient: that would be one-factor authentication. When working with a username and password, there are at least two means of authentication, both based on the knowledge factor. You might argue, however, that an e-mail address used as the username doesn’t really count, given that it is typically public information.
Two-factor authentication is not secure by design either – there has, for instance, been a recent incident in online banking where both factors were attacked successfully.
Therefore, I don’t see the future in having just a device like the YubiKey. Anyone with access to that device could then log on, if no additional factor is used. The combination of such a device with a password, however, delivers real two-factor authentication. When we asked Ehrensvärd about this approach, she clarified that though it was not highlighted by Google in the IEEE white paper that Wired reviewed, the Yubico approach and vision is to always combine a YubiKey with at least a simple PIN or password.
One problem will remain in any case: your password is in your brain, available anywhere and anytime (unless you forget it). A token has to be carried around. These days, when most of us use multiple devices, this can become rather inconvenient. If we leave the token in the device, we opt for a rather insecure approach – everyone who has access to the device has access to the token as well. But carrying a token around is not what users prefer, even with very small form factors like the ones Yubico provides. NFC solves some of the problems, because such a device can be used with multiple systems, but you still have to remember to carry it.
I personally would prefer a credit card size form factor for a NFC device plus the choice between password and OTP (one time password), sent out-of-band to my cell phone, as the second factor. My wallet and the cell phone are the two assets I typically carry around. When discussing this with Ehrensvärd, she answered that many YubiKey NEO customers place the NFC-enabled Yubikey NEO in their wallet, and then tap the whole wallet to a smart phone or NFC laptop to login.
Although Yubico still lacks a card form factor, their approach is quite interesting – and they might get a big push from Google in the future. If you are looking for affordable approaches to stronger authentication, you should have a look at Yubico today. Even though it is not the perfect solution to the stronger authentication challenge, Yubico provides an interesting alternative.
23.03.2011 by Martin Kuppinger
Last week I blogged about the RSA SecurID case. In the meantime there have been several other posts and pieces of advice on the topic, and I’d like to put together some thoughts of my own, looking at what customers should do now.
What should existing customers do short-term?
In most cases, RSA SecurID will be a standard mechanism for strong authentication which can’t be replaced immediately. Customers who don’t use a versatile authentication solution usually aren’t able to switch to another (stronger) authentication mechanism on the fly. Abandoning RSA SecurID, however, would make things even worse, because it would mean stepping back to one factor with one or two means of authentication. Thus it is about staying with RSA SecurID and deciding which additional actions to take – “compensatory controls”, e.g. increased auditing, additional fraud detection technologies, and so on.
Customers who have a versatile authentication approach in place might evaluate whether they can replace RSA SecurID with another factor – which, for time and logistics reasons, would then be an approach not depending on hardware. However, doing that will be somewhat complex (helpdesk calls, technical aspects, …). Customers should therefore first check whether the increased risk of using RSA SecurID is acceptable. Instead of replacing it, adding another factor or means for high-risk interactions and transactions appears to be the most appropriate option. Besides this, the auditing actions mentioned above have to be implemented.
What should existing customers do mid-term?
Replacing a technology like RSA SecurID is quite expensive. Given that RSA will harden its own systems and seeds can be changed over time, the threat will decrease. However, as mentioned in my last post, RSA SecurID never will be the same again. The mid-term answer, from my perspective, is versatility. Having more options for quickly changing to other and additional factors and means for authentication is the most promising approach. Thus, RSA SecurID is just one of multiple approaches.
For high-risk environments, biometrics might come into play again (if not used yet). In addition there are some two-factor authentication approaches which don’t rely on seeds and secret algorithms. However, they aren’t necessarily absolutely secure (if anything can be absolutely secure), so customers should carefully evaluate whether other approaches provide real advantages over the established RSA SecurID approach. The same level of mistrust should be applied to all types of authentication.
What should potential buyers do?
It is about re-evaluating the strategy for authentication. Versatility is key – strategies need to be rethought if they are not focused on a versatile approach that allows different types of authentication mechanisms to be used and exchanged flexibly. Regarding RSA SecurID, the risk has to be rated again, and decisions about whether the approach is sufficient for the interactions and transactions to be protected have to be reviewed. From my perspective it is not so much about abandoning RSA SecurID (depending on what RSA does to increase security again, for sure – but I assume they will do a lot) but about carefully analyzing the level of protection provided and weighing it against the risks of authentication fraud for what has to be protected. When deciding to use RSA SecurID, appropriate controls have to be implemented – but that is true for any other authentication mechanism as well.
By the way: Regardless of the RSA SecurID approach, any authentication strategy which doesn’t focus on versatility, risk-based authentication/authorization and context-based authentication/authorization should be re-thought.
Some general thoughts:
RSA has had a very strong image for their RSA SecurID approach – and it worked for many years. However there are two fundamental issues:
- Centralized seeds
- Confidential algorithm
Both are risks of that mechanism, so its security is obviously limited. Regardless of which approach you use, it is important to think about the potential weaknesses (social phishing; central stores which might become targets for attackers; …). Unfortunately, security comes at a price, because there are no simple, cheap, easy-to-use approaches without logistics costs and other shortcomings which provide perfect security.
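The centralized-seed risk can be illustrated with the open HOTP standard (RFC 4226). RSA’s actual SecurID algorithm is time-based and was kept confidential, so this is not their code – but the structural point is the same: whoever obtains the seed can compute exactly the same one-time codes as the token.

```python
import hmac, hashlib, struct

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Event-based OTP per RFC 4226: HMAC-SHA1 over the counter,
    then dynamic truncation to a short decimal code."""
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# In a SecurID-style scheme the seed exists both in the token and in a
# central database at the vendor/customer - the weakness discussed above.
seed = b"0123456789abcdef0123"
token_code = hotp(seed, counter=42)
attacker_code = hotp(seed, counter=42)  # anyone holding the seed computes the same code
print(token_code == attacker_code)      # True
```

Nothing in the algorithm itself is broken; the algorithm is even public. The risk lies entirely in where copies of the seed live.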
Again, as mentioned in my last post, we will discuss topics like versatile authentication and the RSA SecurID incident at EIC 2011. You shouldn’t miss that event.
18.03.2011 by Martin Kuppinger
Yesterday RSA informed about a security breach which they assume to be an “advanced persistent threat”, i.e. a long-running attack. In this case it was apparently directed against the seeds, i.e. the base material for the keys which are used to generate OTPs. In other words: the first and (until now) stronger part of the two-factor authentication used with RSA SecurID cards isn’t secure anymore. In fact it shows that every approach which relies on a central database of such seeds has inherent security weaknesses.
The importance of this breach becomes obvious when looking at the SEC 8-K filing – it is important enough that it required an official note to the stock markets.
The bad thing in this case is that it’s not a single RSA SecurID card which is compromised, which would always be the case with stolen or lost cards – potentially all RSA SecurID cards are affected, because currently it is known neither which seeds are compromised nor by whom. RSA has (under the link above) provided hints for its customers on what to do now. They claim that there is no immediate impact for existing customers. But looking at the actions recommended by RSA, it becomes obvious that customers can’t do much – short of fundamental changes, e.g. RSA SecurID implementations based on new seeds (with all the consequences of such a change) or changes to other/additional authentication mechanisms.
Our recommendation for customers who have RSA SecurID cards implemented is to first carefully analyze the situation and their specific risks – for example: which type of information is at risk if the RSA SecurID based authentication is not only threatened (like now) but an attack actually takes place? Based on that, they have to decide whether immediate actions are required (besides informing users to be even more careful regarding phishing mails and so on, because the authentication strength now relies mainly on the PIN). These actions might range from increased threat analysis and forensics to adding other authentication technologies.
Beyond that, customers should take the opportunity to rethink their authentication strategies in general. How could they implement a flexible, versatile approach for all types of users and for the different security requirements of interactions and transactions? Versatility definitely helps – having a versatile approach for authentication in place makes it possible to quickly add additional factors and means or to replace some factors, which isn’t the case when only one specific mechanism (like RSA SecurID) is used.
What does it mean for RSA? In fact, the “trusted name in security” definitely will suffer – trust takes a long time to build and a very short time to lose. However, RSA isn’t that heavily dependent on the success of RSA SecurID anymore. The company has broadened its portfolio. RSA SecurID is sort of a “cash cow”, but there are other approaches available, including versatile authentication solutions. And, honestly: if you use RSA SecurID tokens as one element within a versatile authentication strategy, there is value in that approach – especially given that RSA will do everything to ensure that such a breach never happens again (but there is always some risk of unlikely things happening). It is only one approach of many, though, and the best thing customers can do is to become more flexible, more versatile.
Interestingly, this fits well with research we are currently working on. One part of this will be a research note comparing the strength and usability of different authentication mechanisms; the other has the working title “why hardware-based OTP is dead” – the latter not mainly because of the potential security risks, but because of the limited usability and the cost of logistics. Thus, have a look at our upcoming research. The topic will also be discussed at EIC 2011 (European Identity Conference), May 10th to 13th in Munich.
04.03.2011 by Martin Kuppinger
News about a significant number of malicious apps for the Android platform on mobile phones broke yesterday. Many comments still sounded a little surprised. However, there is no reason to be surprised. Today’s mobile phones are insecure by design. The vendors haven’t understood that security is mandatory for long-term success, and they are still selling devices which are about as secure as a PC in the mid-’80s of the last century. Unfortunately these devices are connected and have far more capabilities than the PCs of those early days.
The vendors (and developers of OSes) are just ignoring the need for built-in security. A PIN code is a ridiculous mechanism to protect a device which can hold that much sensitive data and which can be used to access sensitive corporate information. How about biometrics or other types of strong authentication? There are many potential solutions out there for mobile devices which are secure by design and still user-friendly.
In addition to the insecure devices and OSes, the concept of apps itself is insecure. How do you manage apps for your corporate users? How do you do DLP (Data Leakage Prevention) for apps? The concept of apps is insecure by design as well. Unfortunately, it is a good example of the wrong design principle – “function follows form” instead of “form follows function”. The concept of apps is about markets and money, about being “cool”, and it is not well thought out, because it isn’t secure (enough).
For organizations, the only consequence can be to review the policies for using mobile devices and massively restrict the professional use of devices which are insecure and have too many capabilities. That requires an analysis of which platforms are allowed for which use cases. You might argue that this won’t work because even the managers want to use their gadgets. Correct – it isn’t a simple task. However, in virtually every country there are laws which require the board to enforce adequate risk management. Using insecure gadgets with access to sensitive corporate information (starting with e-mail) is a risk which has to be mitigated, either by restricting the use of gadgets or by finding more secure ways to use them. By not doing so (or even using insecure devices as a board member), legal requirements are ignored. I’d bet that mobile security will become the next hot topic for auditors…
For vendors, these new attacks are hopefully an alert which helps them understand that security is a key requirement for long-term success in the market. That might lead them to invest more in security which is easy to use.
In the meantime we will see masses of point solutions and services to better protect mobile communication. Be careful with these – some might deliver real value, others will turn out to be more or less placebos. In any case, you should first have a strategy and policies for the secure use of mobile devices before you invest in such point solutions and services.
It will be interesting to observe what happens in the next months. Will vendors wake up? Or will it need more and even more severe incidents for that?
20.01.2011 by Martin Kuppinger
SAP recently announced that it will buy most technology assets from the Swiss-German security specialist SECUDE. The developers and other resources will move to SAP as well, ensuring that both the software and the “brainware” are available to SAP. SECUDE provides solutions for strong authentication, single sign-on, and event management tailored specifically to SAP environments. There is a long-term relationship between the two companies, SECUDE being a supplier to many SAP customers in the areas mentioned.
One might argue that this acquisition isn’t a really big deal, compared to BusinessObjects or others. However, it is a strategically important one. SAP will deliver the core functionality of the SECUDE SecureLogin product as a standard feature, thus providing front-end security for the first time. Overall, the deal appears to be part of a strategic shift towards more “security out-of-the-box” at SAP. Other vendors like Microsoft (out-of-the-box), Oracle or IBM (separate products) have invested heavily in security products in recent years, in many cases through acquisitions. In the press release SAP mentions that customers demanded additional security functionality. And that is what is really interesting about this deal: security can’t be left to third parties; vendors have to provide solutions themselves. Security isn’t the core business (in most cases), but the core business requires security.
However, there will still be enough room for third parties, as long as they focus on security for heterogeneous environments or the niches left by the big players. But providing comprehensive security features is increasingly a must for software vendors in the non-security business. When looking at many products out there, there is still a long way to go to provide security out-of-the-box at an acceptable level. By acquiring SECUDE, SAP has made a significant step forward towards this.
14.10.2010 by Martin Kuppinger
I’m somewhat reluctant when it comes to biometrics. There are some good reasons why biometrics are still a niche approach: the need for specialized hardware, users’ aversion to some biometric approaches like fingerprints, the discussion about potential security weaknesses, for example around fingerprints, the intrusiveness to the user experience, and more…
However, there is one approach I find interesting: Keystroke Biometrics. The German vendor Psylock provides several solutions based on what they call keystroke biometrics. The user has to train the system a little: I had to enter 11 sentences, which took me less than 2 minutes. OK, I type pretty fast, but it will probably never take more than 3–4 minutes to train the system. To authenticate, a sentence has to be entered. The system analyzes the way the user types the sentence and compares it to the stored values. I tried to change my way of typing a little (slower, with breaks, …) – and wasn’t identified. When I typed as usual, I was always identified successfully.
For sure there will be some false negatives/false positives, depending on the configuration. But overall, it is a simple approach, based on the rhythm of typing, which appears to be unique. And: you don’t need special hardware, because every user has a keyboard – at least if you don’t use an iPad or another tablet. And even there you might use the technology, because you can type with your fingers on the screen. However, that would mean having two identities – one for the tablet and one for a system with a real keyboard.
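As a hypothetical sketch of the underlying idea (not Psylock’s actual algorithm): enroll a user by averaging the inter-keystroke timing intervals over the training sentences, then accept an authentication attempt only if its rhythm deviates little from the stored profile. The function names, timing values and tolerance threshold below are invented for illustration.

```python
def enroll(samples):
    """Build a profile: average per-position inter-key intervals (ms)
    across several training samples of the same sentence."""
    n = len(samples[0])
    return [sum(s[i] for s in samples) / len(samples) for i in range(n)]

def matches(profile, attempt, tolerance=0.25):
    """Accept if the mean relative deviation from the profile
    stays under the tolerance threshold."""
    deviation = sum(abs(a - p) / p for a, p in zip(attempt, profile)) / len(profile)
    return deviation < tolerance

# Illustrative timings: milliseconds between successive keystrokes.
training = [[110, 95, 140, 80, 120],
            [105, 100, 135, 85, 115],
            [115, 90, 145, 75, 125]]
profile = enroll(training)
print(matches(profile, [108, 96, 138, 82, 118]))    # True: the usual rhythm
print(matches(profile, [210, 180, 260, 150, 230]))  # False: slower, different rhythm
```

Tuning the tolerance is exactly the false-positive/false-negative trade-off mentioned above: a tighter threshold rejects impostors more reliably but also rejects the legitimate user more often.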
From my perspective, this approach is interesting either for adding another factor to authentication or for password resets instead of security questions and similar approaches. It is simple to use and to implement, requires no additional hardware, and is intuitive – one of the most appealing approaches in biometrics.
31.03.2010 by Martin Kuppinger
In my recent post on versatile authentication I touched the topic of national eID cards. Some two weeks ago, I did a presentation on eID interoperability from a private perspective. I started with the question about why strong authentication technologies are still not widely used. The vendors might claim that they are, but in fact we still mainly rely on weak approaches like username/password, PINs, PIN/TAN, and so on.
One reason is that approaches which are reusable need a sponsor. Many companies in eBanking, eCommerce, and other areas understand the need for strong authentication. But they don’t want to rely on proprietary mechanisms. They don’t want to deploy and provide the logistics for advanced mechanisms due to the costs associated with them. And they don’t want to invest in a technology for their customers which might then be used by their competitors as well. One example of the latter situation is readers for cash cards.
For sure you could argue that the example of the UPU (Universal Postal Union) demonstrated some 145 years ago that this isn’t a valid argument. Before the UPU, there was a complex system of billing between postal agencies in different countries: they counted the letters and the fees and billed each other. The basic idea behind the UPU was that there is usually one letter back per letter sent, so the fees which have to be paid are more or less equal – making it much cheaper to drop that billing entirely and have senders pay only a fee in the originating country of the letter. This system has now worked for a pretty long time. And I don’t have many doubts that a standardized system which requires some hardware to be deployed would work as well if everyone supported their own customers – the ones with fewer customers will pay less on average because they have to deploy less, the ones with more customers will pay more.
Unfortunately, I see neither a standard solution which is accepted by everyone nor the willingness to create one. Thus we need alternatives. And that is where eID cards come into play. There is potential for mass adoption, at least in countries where it is mandatory to have such a card. However, that requires that these cards can really be used for strong authentication in eCommerce and other areas. And that, again, requires the deployment of readers for these cards.
Thus, we need someone to sponsor at least the initial deployment to build the critical mass. The only ones able to do that are governments, as in Germany, where 1.3 million readers will be sponsored. That in fact is business development, because it enables the use of Internet-based services with strong authentication. It enables new business models and efficiency in organizations, and it will reduce fraud and the associated costs. However, the eID projects usually aren’t seen from that business development perspective – private use cases are more of an add-on. Decisions like the one in the Netherlands to shift such projects to a later point in time show a lack of understanding of the potential economic impact.
We need mass adoption of reusable strong authentication for the “Internet business”. The only way to achieve this is by sponsors who invest in the mass adoption of technologies. And the most likely sponsors are governments, as part of what they do for their economies and their competitive advantage. Once we have a mass adoption of strong authentication, we might see additional technologies being used for graded and step-up authentication. Vendors of versatile authentication and context-based authentication/authorization will benefit from this as well because eID cards will always be only one of many accepted means of authentication. But the ones who benefit most are the businesses themselves which can reduce fraud and implement new business models.
11.03.2010 by Martin Kuppinger
Versatile authentication is one of the hot topics in IT – more and more vendors are starting to support it in some way or another. Versatile – not that common a term – means the ability to flexibly switch between different authentication methods. In practice, versatile authentication solutions should support at least the following features:
- Flexible use of different authentication methods.
- Simple plug-in of additional authentication methods, i.e. extensibility.
- Flexible interfaces for applications OR integration with existing technologies which interface with other apps.
- Support for step-up authentication and other more advanced approaches.
Other aspects like fallback methods, management support for handling the token logistics and so on are value-adds, depending on the implementation of the versatile authentication technology.
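How such a platform could be structured can be sketched with a minimal plug-in interface – hypothetical, not any vendor’s actual API: authentication methods register under a name and a strength level, applications call a single entry point, and step-up falls out of combining methods until the required strength is reached.

```python
from typing import Callable, Dict, Tuple

class VersatileAuthenticator:
    """Minimal pluggable-authentication sketch: methods register under a
    name and a strength level; callers just state the strength they need."""
    def __init__(self):
        self._methods: Dict[str, Tuple[int, Callable]] = {}

    def register(self, name: str, strength: int, check: Callable[[str, dict], bool]):
        self._methods[name] = (strength, check)     # plug-in point: extensibility

    def authenticate(self, user: str, credentials: dict, required_strength: int) -> bool:
        # Step-up: accumulate strength from the presented methods, strongest first.
        achieved = 0
        for name, (strength, check) in sorted(self._methods.items(),
                                              key=lambda kv: -kv[1][0]):
            if name in credentials and check(user, credentials):
                achieved += strength
            if achieved >= required_strength:
                return True
        return achieved >= required_strength

auth = VersatileAuthenticator()
auth.register("password", 1, lambda u, c: c.get("password") == "secret")
auth.register("otp", 2, lambda u, c: c.get("otp") == "123456")

print(auth.authenticate("alice", {"password": "secret"}, required_strength=1))  # True
print(auth.authenticate("alice", {"password": "secret"}, required_strength=3))  # False
print(auth.authenticate("alice", {"password": "secret", "otp": "123456"}, 3))   # True
```

The application never hard-codes a mechanism – it only states the strength it needs, so methods can be added or exchanged without touching the application.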
10.02.2010 by Martin Kuppinger
My colleague Jörg Resch recently blogged a lot about approaches for “lightweight” authentication and the risks associated with them. There are many companies out there with new or claimed-to-be-new approaches to more or less strong and more or less valid authentication. Whether it’s the approach of isec, of GrIDsure, of Yubico or one of the many other vendors out there, I doubt that the holy grail of authentication is amongst them. Some of them are definitely interesting, some of them not. Many are interesting as one element in an authentication strategy – like GrIDsure, which is OEMed by other vendors as part of their solutions. There is no doubt that many of these solutions can provide value in specific use cases – Multifactor Corp. provides something for and from the cloud, YubiKey is lightweight, GrIDsure as well. There are other approaches where I doubt that they really provide the required usability. I’m not a friend of approaches where you have to recognize pictures or faces, but they appear to have their market as well.
However, what’s really important around all these approaches for strong authentication are two other aspects:
- How do they integrate and work together?
- Are they adequate to protect the transactions and interactions within a specific use case?
My point is: it is not about choosing the authentication mechanism, but about choosing the best mix of a few mechanisms, depending on your use cases. That requires an authentication (and authorization) strategy. That requires platforms for versatile authentication like the ones offered by vendors such as ActivIdentity, Entrust, Oracle, and others. That requires a clear understanding of the risk and thus the security requirements of different use cases. Then it is about choosing the appropriate mechanism or a mix of them, using step-up authentication if required, and so on.
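The risk-driven mix can be sketched in a few lines – a hypothetical policy table, not any vendor’s product: each risk level demands a set of factors, and the difference between what the session has already presented and what the policy demands is exactly the step-up prompt.

```python
# Hypothetical policy: map transaction risk to the factors a session must present.
POLICY = {
    "low":    {"password"},
    "medium": {"password", "otp"},
    "high":   {"password", "otp", "smartcard"},
}

def step_up_needed(session_factors: set, risk: str) -> set:
    """Return the extra factors the user must still provide for this risk level."""
    return POLICY[risk] - session_factors

session = {"password"}                    # user logged in with a password only
print(step_up_needed(session, "low"))     # set(): no step-up required
print(step_up_needed(session, "medium"))  # {'otp'}: ask for a one-time password
```

A low-risk interaction proceeds with what the session already has; a high-risk transaction triggers a prompt for the missing factors instead of forcing the strongest mechanism on every login.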
The biggest risk is that authentication is either not usable or too simple. That might happen when relying on a single mechanism. By mixing several, things become much easier.
To learn more about this, you definitely should visit the European Identity Conference in Munich, May 4th to 7th. And there will be an overview of the strong authentication market by KuppingerCole within the next few days – have a look at www.kuppingercole.com/reports.