16.07.2013 by Dave Kearns
At last week’s fourth annual Cloud Identity Summit (founded and curated by Ping Identity) people were still buzzing about the hornets’ nest we had stirred up a year earlier at the third summit when we baldly proclaimed “SAML is dead”.
SAML, the Security Assertion Markup Language, is part and parcel of the Ping Identity federation products. For the last twelve months I’ve been inundated with examples (many from Ping employees) of how SAML is still being implemented today.
Of course, as I noted at the time, the presentation was called “The Future of Authentication” and the context of the bald statement was: “SAML is dead does not mean SAML is bad. SAML is dead does not mean SAML isn’t useful. SAML is dead means SAML is not the future.”
So in that context it was very interesting to hear what Ping Chief Technology Officer Patrick Harding had to say in his keynote to this year’s Summit. He told his audience that what is needed is “a modernized identity protocol stack that is baked into every application that scales to Internet proportions, and hides its complexity from developers and end-users.” He went on to say that the foundation for this “identity stack” is shaped by a trio of emerging protocols — OAuth, OpenID Connect (OIDC) and the System for Cross-Domain Identity Management (SCIM).
You’ll note the decided absence of SAML.
In fact, in the session a year ago we essentially said the same thing: OpenID Connect and OAuth 2 were the foundation for the future of authentication and authorization.
Harding also emphasized that in the coming API economy (another favorite of the analysts at KuppingerCole) it would not be about identity-enabled APIs with access tokens in their requests, but about specific APIs for identity. The key, according to Patrick, is simplifying everything as much as possible. “It is automation for developers, for end-users, we have to eliminate all the friction here,” he said. “Developers should not have to know how OIDC and SCIM work.” Instead, they should simply call an API just as they do for other services within their applications.
As we stated when we awarded the OpenID Foundation a 2012 European Identity Award:
“OpenID Connect is a simple JSON/REST-based interoperable identity protocol built on top of the OAuth 2.0 family of specifications. Its design philosophy is ‘make simple things simple and make complicated things possible’.
While OAuth 2.0 is a generic access authorization delegation protocol, thus enabling the transfer of arbitrary data, it does not define ways to authenticate users or communicate information about them. OpenID Connect provides a secure, flexible, and interoperable identity layer on top of OAuth 2.0 so that digital identities can be easily used across sites and applications.”
OpenID Connect allows a user to authenticate to an app, a service or a site (generically termed a Relying Party, or RP) using an identity established with another system, called the Identity Provider (IdP). Well-known IdPs include Google and Facebook.
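As a concrete illustration, the RP side of an OpenID Connect authorization-code flow can be sketched in a few lines. The endpoint, client ID and redirect URI below are made-up placeholders – a real RP would take them from the IdP’s discovery document, and would verify the ID token’s signature and claims rather than merely decoding it:

```python
import base64
import json
from urllib.parse import urlencode

# Hypothetical endpoint and client registration -- real values come from
# the IdP's discovery document (/.well-known/openid-configuration).
AUTH_ENDPOINT = "https://idp.example.com/authorize"
CLIENT_ID = "my-relying-party"
REDIRECT_URI = "https://rp.example.com/callback"

def build_auth_request(state: str, nonce: str) -> str:
    """Build the authorization request URL the RP redirects the user to."""
    params = {
        "response_type": "code",          # authorization code flow
        "scope": "openid profile email",  # 'openid' marks this as OIDC, not plain OAuth
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "state": state,                   # CSRF protection
        "nonce": nonce,                   # binds the ID token to this request
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

def decode_id_token_payload(id_token: str) -> dict:
    """Decode (NOT verify) the JSON payload of a JWT ID token.
    A real RP must verify the signature and the iss/aud/exp/nonce claims."""
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

The point of the sketch is how little the RP has to do: redirect the user to the IdP, then read identity claims out of the returned ID token.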
While it’s true that wider implementation of the “identity layer” Harding outlined in his talk would greatly benefit users by vastly reducing the number of login ceremonies they’re presented with (and, quite likely, the number of passwords to remember), there is an unspoken problem – the “elephant in the room”, as it were: the user must, at some point, authenticate to the Identity Provider!
Today, that means logging in to Google, Facebook or another IdP by providing – you guessed it – an account name and a password. Some IdPs (like Google) offer two-factor authentication (2FA), but it is optional and – from a security perspective – not much safer than a password alone.
The communications industries speak of “the last mile” (or last kilometer) as the most difficult part of their infrastructure – the connection from the backbone network to the individual subscriber. It is, typically, the slowest, most difficult – and most expensive on a per user basis – part of the network. That login to the IdP is the “last mile” for the identity stack that Mr. Harding (and Craig Burton before him) was talking about as our necessary future.
What we need, really, is an adaptive, dynamic, risk-based authentication and authorization system, based on context derived via open APIs, to correctly and properly authenticate those seeking access and determine the proper level of authorization for them.
If you are at all interested in this, you’ll want to join me for two upcoming webinars.
Next week, I’ll be discussing in greater depth “the Future of Authentication and Authorization,” and expanding on the thoughts I’ve laid out here. Then, in September, we’ll go into detail about how to bring about that future when I present “Authorization as a Calculated Risk.” (link to follow).
The authentication train is leaving the station, headed to the future. You really want to be on board.
02.07.2013 by Dave Kearns
I recently saw a query from a major international consumer goods company headquartered in Europe asking “…we have a Privacy Officer, but we are also looking into the possibility for an Identity Officer, someone who is knowledgeable about SSO, identity and so forth. Unfortunately we are fairly new to the entire identity sphere.” It was an interesting question, so I started a dialog with my colleagues at KuppingerCole to see if we could come to some agreement about the need for such a position.
Martin Kuppinger voiced the thought many of us had when he said “Yet another ‘Chief whatever Officer’?” And it’s true: the list of CxOs is beginning to read like the list of vice presidents of a US bank (a quick search on LinkedIn for those holding the current title “senior vice president” associated with Bank of America turns up 4,706 entries!).
Since the questioner mentioned that they had a “privacy officer”, I searched to see if this was, in fact, a CPO. But it’s not. Questions about privacy are to be sent to the world headquarters “attn: Privacy Officer” or to any of their national HQs with the same notation. To me that means there’s someone in each office who is tasked with tracking privacy issues. Could an “identity officer” do the same? Since it doesn’t appear that the company wants to create yet another CxO, I decided to explore the possibilities of a privacy officer further.
A non-exhaustive search on Google brought up only one company with a person having the title “identity officer.” I did ignore the few references to people who were in marketing and were charged with creating or fostering “brand identity”: that is, the way customers/clients/consumers view the company. Palo Alto’s IdentityMind announced a Chief Identity Officer early last year. As it turns out, though, Dr. Taher Elgamal (the newly named CIdO) was to be more outward looking than to be concerned with internal identity issues. He’s quoted in the announcement as saying: “With IdentityMind we have developed a technology that allows us to establish the identity in the context of an Internet payment transaction with high degree of certainty. In this new role I now have the ability to focus on working with the Industry in expanding this definition and bridging the gap between users and their Internet identities.” So, still not what the gentleman who asked the question was looking for.
Further searching turned up a couple of postings/musings by people I follow in the Identity arena: Matt Flynn (currently with Oracle, previously with MaXware, RSA, Netvision and StealthBITS) and Matt Pollicove (currently with Commercium Technology specializing in SAP Security, previously with MaXware, Mycroft and Secude). While they were initially writing in terms of a Chief Identity Officer, their thoughts are still valid for a lower-level Identity Officer – even if they both wrote about this seven years ago!
Unfortunately, no one seems to have taken up this conversation, nor done anything about instituting such a position since then. Maybe the time wasn’t ripe. Maybe now is the right time.
Pollicove, who started the conversation, stated he was doing it because “I am constantly thinking about how to make Identity Management a larger part of the enterprise, not only because it makes sense from [a] security and compliance [perspective], but because good, clean, organized IdM data results in a better running organization.” In other words, he thought IdM needed more visibility in the enterprise and that a CIdO would go a long way towards gaining this. Now I think we can agree that the visibility of Identity has risen, and risen considerably, within the enterprise without having a CIdO, but it’s generally risen for negative reasons (data breaches, cyber-snooping, etc.) rather than for, as Matt hoped, “a better running organization.”
Matt Flynn went a little further: “I agree with Matt that one owner would certainly make IdM projects easier to manage, but that’s not the greatest benefit.” He elaborated: “I guess my vision would include a Director of Identity that reports to the [Chief Information Officer] or equivalent. She would be responsible for compliance, attestation requirements, establishing Identity policies, ownership of IdM solutions, backup and recovery solutions for identity-enabled applications, etc.” To which I’d also add access control.
Flynn went on to say, “A director of IdM … would need to find solutions that enable the business, facilitate ease-of-use and also maintain strict security guidelines. IdM solutions span the enterprise and the design, architecture and management thereof ought to be central.”
In your organization, especially if it’s a large one, is there a single person – or office – that’s responsible for everything touching identity throughout the entire enterprise? Does anyone coordinate policy on identity for employees, contractors, volunteers, clients, vendors, partners, etc.? If not, why not? If not – wouldn’t you operate more efficiently if there was one? More importantly, wouldn’t your organization be more secure with fewer cracks for the malefactors to slip through?
Would the director of Identity be part of the IT organization, or be on the business side? KuppingerCole’s white paper, “The Future of IT Organizations” will give you some help in making that decision.
I’m not saying you need a Chief Identity Officer, but perhaps a Director of Identity (or Identity Systems) within the office of the Chief Privacy Officer (or the CIO) might be the answer. Let me have your thoughts on this, either in the comments section, email to dk AT kuppingercole DOT com, or tweets to @dak3. If there’s enough interest, I’ll continue the conversation.
By the way, if you missed EIC last month, or missed my presentation on the Future of Authentication and Authorization, I’ll be webcasting an updated version in just a few weeks. It’s free but, of course, space is limited. Sign up now.
18.06.2013 by Dave Kearns
One of my favorite passages from Lewis Carroll is the dialog in “Through the Looking Glass” between Alice and Humpty Dumpty:
“There’s glory for you!”
“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t — till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument,’ ” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master — that’s all.”
It often seems exactly like that when we have technical discussions.
I first became aware of this when the areas of Identity and Security began to seriously overlap late last century. Around 2005 a group, the “Identity Gang”, coalesced around the idea of having informal discussions about identity issues before, during, or after conferences (Catalyst, Digital ID World, EIC, etc.). What we all quickly discovered was that we didn’t agree on the meanings of terms. So we launched the Lexicon project. We didn’t get very far.
When I joined the Identity Ecosystem Steering Group (IdESG) last year, the same issue came up almost immediately – we didn’t all agree on the meaning of terms. A project was started to create a taxonomy for the IdESG which, incredibly, ended up containing 785 terms! Some had only one definition listed, but others had 2, 3, 4 – up to 13 – different definitions. Needless to say, this is still an ongoing project with no end in sight.
I bring this up because of a Twitter conversation I was having yesterday. While Twitter isn’t ideal for dialog among more than 3 people (there were, at times, five involved in this discussion) it does have an immediacy that other methods (chat forums, email, et al) don’t. The drawback, of course, is the 140 character limitation per tweet, which leads to lots of abbreviations, elided letters and texting shorthand – none of which is helpful for understanding, especially among people who normally don’t converse with each other.
What happened was that someone referred to “PII” which I understand as Personally Identifiable Information. Others, though, consider it an abbreviation for Personal Identifying Information. There’s a subtle difference.
Personally Identifiable Information (I’ll call this PII1) is information, either a single attribute or a combination of attributes, which can uniquely identify an individual in a given context or namespace. Your date of birth does not uniquely identify you, but in combination with your mother’s maiden name and place of birth certainly can. Other attributes such as a national ID number (Social Security, National Health, etc.) are PII all by themselves.
Personal Identifying Information (PII2), on the other hand, is defined as “Information which can be used to distinguish or trace an individual’s identity, such as their name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth, mother’s maiden name, etc.” according to the IdESG taxonomy, quoting from the US government’s FICAM Trust Framework Provider Adoption Process (that’s the US Federal Identity, Credential and Access Management Program).
The difference between the two is subtle, but significant. Under PII1, date of birth – in and of itself – is not PII. Under PII2 it is, even though it doesn’t uniquely identify you. So when creating privacy law or examining privacy issues (as are raised by the US Government’s PRISM program) it is very important to know which definition of PII is being used. Suppose, for example, it becomes illegal to knowingly distribute PII of others. Or, in the context of computer breach situations, it becomes necessary to inform entities when their PII has been leaked or stolen. How do we decide if it really is PII that’s escaped into the wild?
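To make the distinction concrete, here is a toy classifier for the two readings. The attribute sets and the “three linkable attributes form a uniquely identifying combination” rule are assumptions for illustration only; real determinations depend on jurisdiction and context:

```python
# Illustrative attribute sets -- assumptions for the sketch, not legal categories.
UNIQUE_ALONE = {"ssn", "passport_number", "biometric_record"}  # identifying by themselves
LINKABLE = {"date_of_birth", "place_of_birth", "mothers_maiden_name", "name"}

def is_pii1(attributes: set) -> bool:
    """PII1 (Personally Identifiable Information): a single uniquely
    identifying attribute, or a combination that uniquely identifies
    an individual in a given context or namespace."""
    if attributes & UNIQUE_ALONE:
        return True
    # crude stand-in for 'this combination is uniquely identifying'
    return len(attributes & LINKABLE) >= 3

def is_pii2(attributes: set) -> bool:
    """PII2 (Personal Identifying Information, the FICAM reading): any
    attribute linked or linkable to an individual counts, even alone."""
    return bool(attributes & (UNIQUE_ALONE | LINKABLE))
```

Date of birth alone trips PII2 but not PII1; combine it with place of birth and mother’s maiden name and both definitions agree.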
The definition I use, PII1, is entirely in keeping with the work KuppingerCole has done on Information Stewardship, where we differentiate between data and information – “Data is nothing more than the symbols which are processed by the computer. Data, in itself, has no meaning and no value. Information is data with context or processing that makes it useful.” Some attributes (such as date of birth) are simply data, of little use without context or other qualifying data that creates information.
We recognize that the terms we use are not always understood by everyone. In fact, we at KuppingerCole have the added problem of bi-lingual (in our writing) and multi-lingual (in our discussions) use of terms. “Digital Identity” (which Google tells me is “digitale Identität” in German) may have numerous translations each with multiple meanings.
For this reason, almost all KuppingerCole published works include a Glossary section, in which we define the terms used in the paper. This doesn’t mean that the definitions we use are universally accepted nor that other definitions might not be better. It simply means that when the term is used in the publication this is what you should understand it to mean. That way, any subsequent discussion starts off with everyone on the same page, so to speak.
This works well for publications, not so well for impromptu discussions. Maybe we should each create our own personal lexicon/taxonomy/glossary on the ‘net so we could reference it when we Tweet.
04.06.2013 by Dave Kearns
Whenever I talk about passwords these days, or rather the need to end the use of passwords, I really feel I should call myself Cassandra.
In Greek mythology, Cassandra was the daughter of King Priam and Queen Hecuba of Troy. Her beauty caused Apollo to grant her the gift of prophecy. When Cassandra refused Apollo’s attempted seduction, he placed a curse on her so that her predictions and those of all her descendants would not be believed. In the understanding of some modernists, Cassandra’s prophecies were flawed and incapable of coming true. But to the ancients (and in the sense I use it) the prophecies were accurate, amazingly so, but disbelieved because of flawed understanding by the listeners. My listeners frequently cite the cost of replacing passwords, or the ease-of-use of passwords for users, or even the (unfounded) insecurity of other authentication methods. All of that is true, but also irrelevant. The cost of a data breach, the even easier-to-use alternative authentication methods and the incorporation of context-sensitive risk-based access controls (with the authentication step as simply one factor) all support my thesis.
Fortunately, I’m not a Jeremiah. This Biblical prophet (he’s credited with writing the Book of Jeremiah, 1 Kings, 2 Kings and the Book of Lamentations) was attacked by his own brothers, beaten and put into the stocks by a priest and false prophet, imprisoned by the king, threatened with death, and thrown into a cistern by Judah’s officials. These people didn’t like what he had to say! No one would speak up in Jeremiah’s defense, but a number of people, publications and organizations have taken up the cry to banish passwords as the sole method of authentication for user accounts.
One of the people is Google’s Tim Bray. He’s a developer evangelist for the Mountain View company, often speaking to developers, programmers and coders on best practices. Formerly the Director of Web Technologies at Sun Microsystems, Bray was also one of the main authors of the original XML specification. Speaking at the recent Glue Conference (the foremost independent gathering of developers), Tim was vehement in denouncing passwords. As noted by Ping Identity’s John Fontana:
“Near the beginning of his talk, Bray dropped to his knees, pounded the floor with his fists and sent out a plea to web sites asking that they not force him into creating another password. He implored developers to get on board with emerging identity protocols, namely OAuth 2 and OpenID Connect.”
Bray went on to talk about all of the non-password possibilities for authentication that can be built-in to apps. Fontana reports that Bray told the crowd: “If you go into the password business, you are peeing in the swimming pool,” that is, you’re being anti-social and deserve the scorn of your peers.
One publication that presented stark evidence of the vulnerability of passwords was Ars Technica. In an article called “Anatomy of a hack: How crackers ransack passwords like ‘qeadzcwrsfxv1331’,” they presented the story of three hackers – a developer of cracking software, a security consultant, and an anonymous cracker – given relatively standard PCs, readily available cracking software and 16,449 passwords converted into hashes using the MD5 cryptographic hash function. The person with the most underpowered computer took 20 hours and cracked 14,734 of the hashes, a 90-percent success rate. Another person unscrambled 13,486 hashes (82 percent) in a little more than one hour, using a slightly more powerful machine. The story even cites Ars Technica deputy editor Nate Anderson, never known as a cracker, who deciphered almost half of the passwords in just a couple of hours. Go read the whole article – it’s fascinating, and very scary if you still use passwords but feel that by hashing and encrypting them you and your users are safe.
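The mechanics behind those numbers are easy to reproduce in miniature. Because MD5 is fast and the leaked hashes were unsalted, one hash computation per dictionary word can be checked against every stolen hash at once – this is a sketch of the idea, not a real cracking tool:

```python
import hashlib

def md5_hex(password: str) -> str:
    """Unsalted MD5, as used for the leaked hashes in the Ars Technica story."""
    return hashlib.md5(password.encode()).hexdigest()

def crack(leaked_hashes: set, wordlist: list) -> dict:
    """Trivial dictionary attack: hash each candidate once and compare it
    against the whole leaked set. No salt means one computation covers
    every account that chose that password."""
    found = {}
    for candidate in wordlist:
        h = md5_hex(candidate)
        if h in leaked_hashes:
            found[h] = candidate
    return found
```

A real cracker adds mangling rules, GPU acceleration and brute-force passes on top of this loop, but the core weakness – fast, unsalted hashing – is exactly what the three hackers in the article exploited.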
The organization that’s taken a step away from passwords is Amazon.com, specifically Amazon Web Services (AWS), the cloud provider arm of the retailing giant. With a newly released API (didn’t we tell you that the API economy was the coming thing?), developers can use Facebook or Google credentials for authentication, as well as the newly released Login with Amazon, a free service that lets third-party apps and websites use the online retail giant’s system for authenticating users. If you are interested (and any AWS developer should be), Amazon has published an article including examples of use on its AWS web site.
Finally, the UK’s Daily Mail (in the Mail Online) revealed what’s coming on the authentication horizon. Motorola has shown not one, but two very different authentication schemes which may – or may not – ever see the light of day. The first is called the “Biostamp,” an electronic tattoo made of silicon and containing an electrical circuit, antennae and sensors that bend and move with the wearer’s body. The tattoos, designed by Massachusetts-based firm MC10, are intended for medical purposes to track a patient’s health, but Motorola thinks the technology can be used for authentication purposes, as an alternative to traditional passwords.
The second is even stranger – the ‘vitamin authentication pill’ from Proteus Digital Health, which has already been approved by the U.S. Food and Drug Administration and was given European regulatory approval in 2010. It contains a computer chip that is powered, battery-like, by the acid in the ingestor’s stomach. Once swallowed, the pill creates an 18-bit ECG-like signal inside the person’s body that can be picked up by mobile devices and authentication hardware outside it, which could be used to verify that the wearer is the correct owner of the device or account. It’s claimed that the pill could be taken every day for 30 days, if necessary, without any problems.
So there you have it – more evidence that passwords are bad, more experts exhorting developers to stop using them and more major hardware and software firms offering valid alternatives. Still using passwords? What’s your excuse?
21.05.2013 by Dave Kearns
Another European Identity (and Cloud) Conference has come and gone, and once again it was an exciting week with packed session rooms and excellent attendance at the evening events. I’m not sure we can continue to call it the “European” Id Conference, though, as I met folks from Australia, New Zealand, Japan, South Africa and all over North and South America. And lots of Europeans too, I should note. Nor were the attendees content to sit back and soak it all in. At least in the sessions I conducted there was a great deal of give and take between the audience and the speakers and panelists. Most of it was good-natured and information-seeking, but – occasionally – it got a bit raucous.
The track on authentication and authorization – so near and dear to my heart – drew a standing-room-only crowd eager to join in the discussion. As always when AuthN comes up, passwords drew an inordinate share of the attention. I reminded the panelists and the audience that no less a personage than Bill Gates predicted the “death of passwords” back in 2004 – and that even within Microsoft, passwords are still in use.
Too much energy is being spent both on trying to remove username/password from the authentication process and on trying to “strengthen” the passwords that are used. Neither approach is going to be effective. Passwords – the “something you know” – are far easier to use than “something you have” (a security token) and far less scary than “something you are” (biometrics), so the general public will never entertain the idea of switching.
Password strength is, essentially, a myth. Brute force attacks become quicker every day, so hacking the password directly becomes easier every day. Phishing attacks are getting so sophisticated that there’s no need to hack a password (and possibly set off security alarms) when you can induce the user to give it to you willingly.
Two-factor authentication (2FA) had some champions, but most methods have already been shown to be vulnerable either to direct attacks (man-in-the-middle style, or MITM) or to the same phishing attacks that subvert “strong” passwords. The object of a phishing attack is, after all, to get the user to log in with their credentials, which are then captured by the attacker. So go to three factors if you want – it’s not much stronger.
I found widespread agreement (with a few diehard holdouts) for a context-collecting risk-based system for Access Control (which I’ve called RiskBAC). Knowing the who, what, when, where, how and why of the authentication ceremony leaves the username/password combo as only one of many factors (the who). In fact, entering a username and correct password isn’t the end of the authentication but merely the trigger to begin the Risk-based Access ceremony or transaction. The other factors are all gathered automatically through system dialogs after the entry of the password has identified the account to which the claimant wishes access.
Of course, once we’re satisfied that the claimant is most likely who he/she claims to be, we then take that information into account along with the other contextual elements to determine the degree of access we’ll authorize to the resource they’re seeking.
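The ceremony described above can be sketched as a context-gathering step that fires once the password has identified the account. The field names and the shape of the session object are assumptions for the sketch; each deployment would pull these signals from its own session, network and device layers:

```python
from datetime import datetime, timezone

def gather_context(username: str, session: dict) -> dict:
    """Collect the who/what/when/where/how of an access attempt.
    'session' stands in for whatever request object your stack provides;
    its keys here are assumptions for illustration."""
    return {
        "who": username,                        # the account the claimant identified
        "what": session["resource"],            # the resource being sought
        "when": datetime.now(timezone.utc).isoformat(),
        "where": session["source_ip"],          # network location
        "how": session["device_fingerprint"],   # platform / client in use
    }
```

The correct password is thus just the trigger: the record this function builds, not the password alone, is what the access decision is made against.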
While the presentation was called “the Future of Authentication and Authorization,” I did remind the audience that over 2,000 years ago the Romans used the same methods for access control. Biometrics (what you are) was represented by facial recognition, tokens (what you have) by scrolls sealed with the leader’s ring (an early security signature) and passwords were, well, passwords – and often changed daily to guard against leaks of the information, something more of us should do today.
There was also a contextual element to the access control ceremony when the guard, on observing the claimant, was able to identify him in the context of where he knew the face from – the morning roll call, or the guardhouse. The sealed scroll had context based on what the guard knew about the location (at the camp or thousands of miles away) and condition (alive and kicking, or breathing his last) of the official who sealed the token.
There were lots of other exciting moments – even aha! moments – in the tracks I did on Trust Frameworks and Privacy by Design, as well as in others’ sessions, especially those on Life Management Platforms, a coming technology that many who were hearing about it for the first time agreed will be game-changing when it arrives – and that may not be too far off. If you’d like to catch up, see the just-released Advisory Note: “Life Management Platforms: Control and Privacy for Personal Data” (#70745).
And there was exciting, non-Identity related, news as well. We of course announced EIC 2014 for next May but – remember up at the top of this post I said that it was a larger than European conference? Well we also announced EIC 2014 London, EIC 2014 Toronto and EIC 2014 Singapore. EIC is going worldwide, and the people involved in identity couldn’t be happier. Dates for the new venues haven’t been finalized yet, but I’ll be sure to tell you about them when they are.
07.05.2013 by Dave Kearns
For years we’ve spoken about the four “A”s of identity & security – Administration, Authentication, Authorization, and Audit – but maybe it’s time to drop an “A”. Maybe it’s time to speak of “Access Control”, which encompasses Authentication (sometimes referred to as “AuthN”) and Authorization (“AuthZ”).
In many instances authorization is binary and tied directly to authentication – if a person is authenticated, then they get access to a resource. The authorization is tied only to the authenticated entity. Consider building security, for example – swipe your proximity card and you’re allowed in. Or, in rather more ancient practice, unlock the door with your key and get access. In the former case, the use of the proximity card (the “token”) is probably recorded someplace, so there is at least a rudimentary audit trail. When the key is the “token”, then there is no trail.
Until recently, the same was true concerning access to digital resources – if you authenticated to the system (network, server, application, etc.) then you got access as defined for the username you are using – most typically to a group of resources.
Note that there’s no actual proof that the person being authenticated is the same person for whom that particular account was created. The standard username/password combination that comprises the vast majority of authentication transactions today gives absolutely no assurance that the “proper” user (whatever that means) is the one being granted access. For example, I do password-protect my computer (it’s a laptop that travels with me). But my wife knows the password, and has had to use it on rare occasions when I’m not available but information is needed. The computer has no idea that it’s her and not me accessing those resources. Tokens do not improve this situation, and biometrics provide only slightly more proof since, in practice, it isn’t the biometric itself (a picture of your fingerprint, for example) that is presented but a key or token created from the parameters of the biometric.
I could, of course, set up a separate account for her so that she could authenticate as herself. But for the purposes she might need to access the PC, she would need at least the exact same authorizations that I have. Creating that second account, though, reduces the security of the system. With two accounts, the risk that a breach could occur is actually doubled – the risk of my account being compromised PLUS the risk that my wife’s account could be.
The usual method of controlling authorizations for a single user is to have multiple authentications for that user, multiple identities if you will. On my Windows system, I need to sometimes authenticate as the Administrative user when I need to access system resources, install/remove software, etc. Most of the time, I authenticate as a User with a more limited set of authorizations. The same is true of ‘nix systems, where the root account is used sparingly, and only when needed. Even within applications, a similar system is observed – most of the time, I would authenticate to a database as a user, but occasionally I need to be the database administrator (DBA) in order to, well, do administrative stuff. Again, in reality, most people don’t do this – although they should – choosing the “ease of use” that authenticating as the more powerful user brings.
The bottom line is that the important thing is the authentication. Get that right (which usually means enter the correct password) and the authorizations flow: it’s all or nothing, black or white, good or bad. But with data breaches, especially the theft of usernames and passwords, seemingly coming more frequently as each day goes by (and you’d think organizations would have learned by now, wouldn’t you?) we need to do something different.
For a dozen years or so, the “thing we need to do” has been identified as replacing the username/password combination with something “stronger”. But we’ve learned from study after study that there really isn’t anything strong enough – tokens, biometrics and “hardened” passwords are all flawed. And while stealing a biometric is tougher than guessing a password, it’s a whole lot more difficult to replace a fingerprint than it is to change a password.
As I’ve said for many years, and as I hope to re-iterate strongly at the upcoming European Identity & Cloud Conference (EIC), context, as part of a well thought out risk-based access management system, is what we need. Some use the phrase “adaptive authentication” to mean, in essence, a dynamic authentication which may require one, two or more factors depending on the circumstances. Still, this is really just one part of risk-based access control. It’s unfortunate that RBAC has come to mean Role-based AC, so we’ll need to come up with a different term – perhaps Risk Managed Access Control (RMAC).
The authentication continues as we’ve always done it – username/password, token, biometric, what-have-you, singly or in combination – but we collect context data (location, platform, date and time, and so on) and evaluate it giving it a risk metric. Alternatively we could use the inverse and call this a “trust metric” – the amount of trust we have in the validity of the identity of the person attempting the authentication. Based on that metric, we grant authorization on a sliding scale, which can be as fine-grained as your rules engine will allow.
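To make the idea concrete, here is a minimal sketch of such a sliding-scale decision. The context attributes, weights and thresholds are illustrative assumptions of mine, not part of any standard; a production rules engine would be far richer.

```python
# Sketch of a risk-based access decision. The attributes, weights and
# thresholds here are illustrative assumptions, not a standard.

KNOWN_LOCATIONS = {"office", "home"}
KNOWN_PLATFORMS = {"desktop", "laptop"}

def risk_score(context):
    """Return a risk metric between 0.0 (fully trusted) and 1.0."""
    score = 0.0
    if context.get("location") not in KNOWN_LOCATIONS:
        score += 0.4          # unfamiliar location
    if context.get("platform") not in KNOWN_PLATFORMS:
        score += 0.3          # unfamiliar device
    hour = context.get("hour", 12)
    if hour < 6 or hour > 22:
        score += 0.2          # outside the user's usual hours
    return min(score, 1.0)

def authorization_level(context):
    """Map risk to a sliding scale of access rather than all-or-nothing."""
    r = risk_score(context)
    if r < 0.3:
        return "full"
    if r < 0.6:
        return "read-only"
    return "step-up-required"   # demand an additional authentication factor
```

With a familiar office context this yields full access; an unknown device from an unknown location at 3 a.m. pushes the score high enough to demand an additional factor.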
We aren’t there yet, but we need to be. The presentations at this month’s EIC can bring us closer. You really should be there.
23.04.2013 by Dave Kearns
Larry Ponemon, of the Ponemon Institute, is well known for excellent surveys about technology issues. And Larry didn’t disappoint when he recently released “Moving Beyond Passwords: Consumer Attitudes on Online Authentication, A Study of US, UK and German Consumers” (warning: PDF file).
In summary, the report of the survey concludes:
- “The majority of consumers would use a multi-purpose identity credential to verify who they are before providing secure access to data, systems and physical locations.
- Banking institutions are considered the best for online validation and strong authentication and identity verification. Consumers in all countries believe banks would be the best to issue and manage a multi-purpose identity credential.
- The benefits of a multi-purpose identity credential are convenience (US & UK consumers) and security (German consumers). Identification and authentication when traveling, accessing the Internet and using social networks are the most popular reasons to have single ID.
- There is no clear consensus on what devices would be preferred to manage their multipurpose identity credential. However, in the US more consumers would prefer their mobile devices for identification purposes. In the UK, it is RFID chips. German consumers seem to favor biometrics.
- If consumers trust the organization, biometrics is acceptable to use for authentication.
- Voice recognition and facial scan are the most acceptable types of biometric authentication. Least acceptable in the US and UK is an iris scan. In Germany, least favored are fingerprints.
- Authentication is important when sharing devices with other users. The majority of consumers believe it is important to have authentication that securely verifies their identity on devices that are shared with other (multiple) users.”
So what we’re seeing here is that users favor stronger authentication, but also easier-to-use authentication (thus the preferences for mobile devices, RFID and biometrics as opposed to passwords). There’s also a strong feeling that the identity provider should be trustworthy, or at least be seen as trustworthy: “Industries and organizations considered by consumers in all three countries as most trustworthy to safely issue and manage a multi-purpose identity credential are: banking institutions, credit card and Internet payment providers, telephone, wireless or cable services companies, healthcare providers and postal and delivery services. Least trusted are educational institutions, Internet service providers and retailers.”
The bottom line appears to be that users are looking for ease-of-use coupled with security and trust and these are exactly the issues we will be exploring next month at the European Identity & Cloud Conference (EIC). In particular, I’ll be moderating a track on Authentication & Authorization featuring a detailed look at “Versatile Authentication, Risk- and Context-Based Authentication: Why you need these Concepts”. Risk-based Access Control using context is a subject near and dear to my heart. It appears to be what the consumers in Ponemon’s survey are groping towards, without being able to articulate exactly what they want. It’s also something that seems to be gaining more traction in the marketplace, at least if I can judge by what I’m reading lately.
Chris Zannetos, CEO of Courion, recently wrote a blog post called “Context is everything”. In this look at what he calls “security intelligence,” Zannetos says:
“The activity and traffic monitors such as SIEM and deep packet inspection products have been looking at streams of information flows without the context to make sense of them. This is a bit like analyzing a baseball game by looking only at the types of pitches and result (hit, walk, out) — without understanding who is pitching, who is up to bat, what their past patterns have been, the ballpark, or the weather. In other words, the ‘Moneyball’ factor has been missing.”
<for my non-North American readers, substitute “football” (or “futbol”) for “baseball”>
And, of course, context is about more than a single packet – it’s the Who, What, When, Where, Why, and How of a transaction. Chris even alludes to a deeper context – the history of the context of similar transactions, which should be included in the analysis much like a Bayesian spam filter is used with email.
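The spam-filter analogy can be sketched in a few lines: attributes seen often in a user’s past legitimate transactions score as familiar, new ones as suspect. Everything here (the class, the smoothing) is my own illustration, not a product feature.

```python
from collections import Counter

# Sketch of history-informed context scoring, loosely analogous to a
# Bayesian spam filter: attributes seen often in a user's legitimate
# history are treated as low-risk. Purely illustrative.

class ContextHistory:
    def __init__(self):
        self.seen = Counter()
        self.total = 0

    def record(self, context):
        """Record one legitimate transaction's context attributes."""
        for key_value in context.items():
            self.seen[key_value] += 1
        self.total += 1

    def familiarity(self, context):
        """Fraction of attributes previously observed for this user
        (with Laplace smoothing so new users aren't auto-rejected)."""
        if not context:
            return 0.0
        hits = sum(1 for kv in context.items() if self.seen[kv] > 0)
        return (hits + 1) / (len(context) + 2)
```

A transaction matching the user’s history scores high; one from a never-seen location and platform scores low and would feed a higher risk metric.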
The second piece I read about context was from Jeff Rosenberg, a technical instructor in the Client Services group at Ping Identity. He didn’t use the word “context” in his blog entry called “Identity as a Rental (IDaaR),” but he did describe context-based authentication when he wrote:
“Did the user authenticate via password, certificate or one-time code? Is this user within the corporate network or coming in externally? Which training level or security clearance is required? Perhaps attribute-level permission is involved, such as LDAP group membership. When these questions are satisfied, the user checks out and the service is provided.”
Rosenberg then goes on to talk about the short-term use of particular attributes which are appropriate for the context of a given transaction, but that’s more appropriate for KuppingerCole’s discussions of Life Management Platforms, another subject that will be well covered at EIC next month.
Context, as a contributor to Risk-based Access Control, as collected for SIEM and for packaging identity attributes for short-term use is definitely a winner. And it is readily – and easily – available to most of you who use some form of SAML-based authentication/authorization system. You might wish to read (if you’ve nothing else to do right now) “Authentication Context for the OASIS Security Assertion Markup Language (SAML) V2.0” (another PDF file), all 70 pages of it.
But for today, the introduction should be sufficient: “If a relying party is to rely on the authentication of a principal by an authentication authority, the relying party may require information additional to the assertion itself in order to assess the level of confidence they can place in that assertion. This specification defines an XML Schema for the creation of Authentication Context declarations – XML documents that allow the authentication authority to provide to the relying party this additional information. Additionally, this specification defines a number of Authentication Context classes; categories into which many Authentication Context declarations will fall, thereby simplifying their interpretation.” In other words, this is a way to provide context to the transaction. Once you take context into account, then allowing a simple, easy-to-use factor (password, fingerprint, hardware token, etc.) is no longer a problem. Guessing someone’s password doesn’t get you the context in which it’s used and thus raises the risk factor for that transaction.
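As a sketch of what a relying party might do with such a declaration: the namespace and Authentication Context class URIs below come from the SAML 2.0 specification, but the mapping of each class to a confidence score is my own illustrative assumption.

```python
import xml.etree.ElementTree as ET

# Sketch: a relying party reading the Authentication Context class
# from a SAML assertion and mapping it to a confidence level.
# The mapping table is an illustrative assumption; the namespace and
# class URIs come from the SAML 2.0 specification.

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# Higher number = more confidence in the authentication event.
CLASS_CONFIDENCE = {
    "urn:oasis:names:tc:SAML:2.0:ac:classes:Password": 1,
    "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport": 2,
    "urn:oasis:names:tc:SAML:2.0:ac:classes:TimeSyncToken": 3,
    "urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI": 4,
}

def authn_confidence(assertion_xml):
    """Extract the AuthnContextClassRef and score it (0 if absent/unknown)."""
    root = ET.fromstring(assertion_xml)
    ref = root.find(".//saml:AuthnContextClassRef", SAML_NS)
    if ref is None or ref.text is None:
        return 0
    return CLASS_CONFIDENCE.get(ref.text.strip(), 0)
```

The relying party can then demand a minimum confidence per resource – exactly the “additional information” the specification’s introduction describes.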
We have the tools, all we need is the effort to provide more secure, yet easy-to-use authentication ceremonies. What’s stopping us? Let’s talk about that at EIC next month in Munich.
09.04.2013 by Dave Kearns
2FA, it’s an abbreviation (word? acronym?) I see a lot these days. But it’s not, as I first thought, teenage texting slang (“OMG, that’s 2FA!”) for “too freakin’ amazing”. No, it’s a shortened version of “two factor authentication”, which has been a hot topic and buzzword since Google announced it (although they call it “two step verification”) after the now-infamous hacking of Wired magazine’s Mat Honan (see “The Honan Hack and the BYOI meme”) last summer. Suddenly everyone is writing about 2FA. Of course, they rarely mention that two weak factors can be worse than one strong factor – Google’s offering being a case in point.
But two-factor authentication is really only one case within the more established paradigm of multi-factor authentication (MFA), where “multi” stands for “more than one” and might be two but could be three, four or more. And multi-factor authentication is hardly the new kid on the block – I’ve been writing about it since last century.
Yes, it was in January, 2000 that I wrote two newsletters about Novell’s new release, NMAS – Novell Modular Authentication Services. As I said at the time:
“NMAS lets network administrators choose among different authentication methods, including traditional password control and adding biometric and smart card methods. While biometric and smart card access isn’t new, it’s the control over the methods used, and the subsequent access granted, which makes NMAS a major addition to NDS security.”
With NMAS, you could specify one, two or even three factors to use for authentication and the factors could be any of What you know (password), What you have (smart card) or What you are (biometric). Depending on the factor or factors used, the administrator could further restrict the user’s access rights. Nice to see that Google, Apple and others are finally climbing onto the MFA bandwagon.
MFA is, of course, an integral part of Risk-Based Access Control (RBAC) especially when it can be optionally used depending on the risk factors involved in an authentication session.
You’ll remember, I hope (if not, go read “Passwords & Tokens & Eye Scans, Oh My!,” we’ll wait) that the calculated risk factor for an authentication/authorization event can be used to trigger multiple factors for verification in the authentication ceremony. It might simply be that someone is requesting access to high value resources, or they may be requesting access from an unfamiliar location or platform. It could simply be that the access requested is not within the user’s standard pattern of time of day or time of year (e.g., tax season). Whatever the case, a calculation of high risk should lead to multi-factor authentication for that user at that time.
In some cases (attempts to login as root or admin, for example) you should always look to MFA because the risk is always going to be high.
But it’s not just hardware tokens, biometrics and passwords that should make up the MFA mix. A lot of the contextual items you look at when evaluating risk can also be considered a 2nd (or 3rd) factor in the authentication ceremony.
If, for example, the user is accessing the network from their typical endpoint (office desktop PC, home PC, laptop, smartphone, etc.) then that can count almost as much as a hardware token. If your system then sends an out-of-band SMS to the user with a one-time password (OTP) to be entered during authentication, you might call that 3FA.
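The OTP leg of that ceremony is simple to sketch; the delivery channel (SMS, push, whatever) and expiry handling are deliberately left out, and these helpers are illustrative rather than a hardened implementation.

```python
import hmac
import secrets

# Sketch of the out-of-band OTP step described above. Delivery (SMS,
# push notification) and code expiry are deliberately left out.

def issue_otp():
    """Generate a 6-digit one-time password to send out of band."""
    return f"{secrets.randbelow(10**6):06d}"

def verify_otp(expected, submitted):
    """Compare in constant time so timing doesn't leak digits."""
    return hmac.compare_digest(expected, submitted)
```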
But how secure is 2FA, or MFA?
Noted security expert Bruce Schneier wrote (back in 2009, referencing something else he wrote in 2005!) about hacking two-factor authentication and noted:
“Here are two new active attacks we’re starting to see:
Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.
Trojan attack. Attacker gets Trojan installed on user’s computer. When user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.”
Then why does everyone, it seems, believe that using two factors for authentication is better than using only one? It’s simple: if implemented properly, 2FA does reduce the risk of unauthorized access. Let’s say that the risk of unauthorized access using just a password is 1 chance in 20 (5%), which is probably a little high. Then let’s say that the risk when using a different factor (say, a hardware token) is lower, perhaps 1 in 1000 (.1%). What’s the risk when both are used? Statistical theory says you multiply the first factor’s risk (5%) by the second’s (.1%), which yields .005%, or 1 in 20,000 – a much better risk factor, I think you’ll agree! Of course, if you use a higher-risk second factor (say, 1% or 1 in one hundred) then the overall risk is 1 in 2,000 (5% times 1%), which isn’t as secure as the hardware token we postulated.
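That back-of-the-envelope arithmetic assumes the two factors fail independently – a phishing proxy that captures both at once breaks the assumption – but under it the numbers work out as stated:

```python
# The multiplication from the paragraph above, valid only if the
# factors are compromised independently of one another.

def combined_risk(*factor_risks):
    """Multiply per-factor compromise probabilities together."""
    risk = 1.0
    for r in factor_risks:
        risk *= r
    return risk

password = 0.05      # 1 in 20
hard_token = 0.001   # 1 in 1,000
weak_second = 0.01   # 1 in 100

strong_combo = combined_risk(password, hard_token)   # 0.00005 -> 1 in 20,000
weak_combo = combined_risk(password, weak_second)    # 0.0005  -> 1 in 2,000
```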
The important thing to remember, though, is that you need to set a realistic risk factor for each authentication factor in your ceremony. The same realistic view should also govern how you look at the various context factors when weighing the risk involved in any particular transaction.
The bottom line is that it’s all about the risk, and your job is to minimize the risk either through strengthened authentication protocols or through reduced authorization rights – or both. I’ll be going into more depth on this when I present “Versatile Authentication, Risk- and Context-Based Authentication: Why you need these Concepts” along with some lively panel discussion on the topic at the European Identity & Cloud Conference 2013 coming up next month. I hope you’ll be there.
26.03.2013 by Dave Kearns
Two documents crossed my desk this week – a survey and a “planning guide” – which fit nicely with two recent papers from KuppingerCole, illustrating a need and (unknowingly) confirming our conclusions.
The first is about the current buzzword acronym BYOD (for “Bring Your Own Device”) which my colleague Martin Kuppinger just released an advisory note about (“today it’s almost exclusively mobile devices – smartphones, tablets, ‘phablets,’ etc. – that are referred to with BYOD: a focus that is too narrow…”) but which appears to be with us at least for the near term. The new piece is a survey, commissioned by a group of Cisco partner firms led by Pine Cove (based in Billings, MT).
For this study, the group used a randomized online sampling of full-time American workers and analyzed 1,000 responses. The survey population – Americans employed full-time who own a smartphone – is roughly 53 million, according to the Bureau of Labor Statistics and the Pew Internet & American Life Project. The margin of error of the study is 3 percent.
Among the interesting findings of the study:
- 62% of U.S. employees who use their own smartphone for work do so every day;
- 92% of U.S. employees who use their smartphones for work did so this week;
- Only 1 in 10 workers get some kind of work stipend for their smartphone;
- 39% of workers who use personal smartphones for work don’t password protect them;
- 52% access unsecured wifi networks;
- 69% of BYODers are expected to access work emails after hours.
The one glaring number, to me, is that only 10% of the workers who use their smart device to help them work receive any sort of compensation from their employer. What that tells me is that these American organizations are not supplying their employees with the tools they need to do their job efficiently and productively.
The bottom line, though, is that IT departments should not still be discussing whether or not to support BYOD – the devices are going to be used either way. If IT is going to serve the enterprise and protect its resources, then IT needs to quickly develop additions to its endpoint management plan that cover smart devices, and just as quickly develop policies to bring these devices into the Information Stewardship practice of the organization.
And speaking of Information Stewardship, I just released a white paper called Using Information Stewardship within Government to Protect PII, an offshoot of the advisory note From Data Leakage Prevention (DLP) to Information Stewardship released last fall by my colleague Mike Small and myself. But what excited me was a guide written for The Online Trust Alliance (OTA) called the 2013 Data Protection & Breach Readiness Guide.
The OTA describes its mission as “to enhance online trust and the protection of users’ security, privacy and identity, while promoting innovation and the vitality of the Internet.”
One caveat when viewing the report: the OTA still uses the term “data” where we at KuppingerCole prefer “Information”. As we’ve said, “Loss or leakage of data is not necessarily a loss of information – understanding the difference between data and information is important to ensure protection.” Data might simply be a list of passwords. As such, it’s no more useful than a dictionary. But a list of usernames AND passwords – that’s information, and that could be a problem should it be leaked into the wild. So, if you read the OTA report, remember that when they speak of data they really mean information.
I bring this up because a large part of the report deals with what the OTA calls “Data Lifecycle Management & Stewardship.” As the report notes:
“OTA advocates the need to create a data lifecycle strategy and incident response plan, evaluating data from acquisition through use, storage and destruction. A key to successful data lifecycle management is balancing regulatory requirements with business needs and consumer expectations. Success is moving from a perspective of compliance, the minimum of requirements, to one of stewardship where companies meet the expectations of consumers.”
Of course, this is exactly what Mike and I outlined as good Information Stewardship.
Further, the report bolsters some of our own conclusions when it notes that “Businesses need to continually evaluate the data through each phase [of the lifecycle] and accept four fundamental truths:
- Privacy and use polices need to be continually reviewed and updated.
- The data they collect includes some form of personally identifiable information (PII).
- If a business collects data it will experience a data loss incident at some point.
- Data stewardship is everyone’s responsibility.”
These four principles need to be a part of every organization’s Information Stewardship policy.
By now, you shouldn’t need analysts and pundits to tell you that you need an Information Stewardship policy. It’s also long past the time that you need to incorporate smart, mobile devices into your endpoint policies – and not as a separate “BYOD” policy. Your endpoint strategy should cover these devices along with desktop/laptop machines in the office, at home, and “on the road” (e.g., internet cafes).
If you’re a KuppingerCole client, ask your representative how we can help. If you aren’t – why aren’t you? And, either way, be sure to plan on being at EIC 2013, where BYOD and Information Stewardship will feature prominently.
12.03.2013 by Dave Kearns
Recently the Massachusetts Institute of Technology (MIT) held a conference on the “Future of Health and Wellness.” One of the major takeaways from the conference (according to CIO magazine) was “6 Innovations That Will Change Healthcare.” These are:
- Reality Mining: Using Data to Influence Healthy Behavior
- Social Networking: For Best Results, Group Like-Minded People
- Usability: Give Users Something Familiar
- Home Care: Make It Easy, Involve Everyone
- Emotion Sensors: For the Willing, Anything Can Be Monitored
- Wellness Counseling: Sometimes, People Like Talking to Computers
That’s certainly a good list, but hardly ground-breaking I suspect. And none of these address the problem I encountered in moving from California to Maryland a couple of years ago which necessitated a change in health providers.
Both my California Health Maintenance Organization (HMO), the Palo Alto Medical Foundation (PAMF), and my Maryland HMO, Kaiser-Permanente (KP), have a good online presence. Both allow me to make appointments, see test results, contact my medical practitioners, re-authorize prescriptions and more through my web browser. All good, but Kaiser should definitely plan on an upgrade to address a number of usability issues.
What astounded me, though, was that there was no way to transfer information – test results, diagnoses, prescription information, etc. – from PAMF to KP! Nor, for that matter, was it possible to go in the other direction. Likewise, I could not give my KP primary care physician a “proxy” to see my PAMF data. All I could do was to bring up the data in the PAMF site, highlight, copy and paste into a word processor document, then print it and carry it to my KP physician.
That does seem like a 19th century answer to a 21st century problem. But I wouldn’t have brought it up if I didn’t have an idea about how to solve it, and the answer is LMP – Life Management Platforms.
My colleague Martin Kuppinger, in his Advisory Note “Life Management Platforms: Control and Privacy for Personal Data, ” says that: “Life Management Platforms will change the way individuals deal with sensitive information like their health data.”
Now one reason why I can’t easily port data from one healthcare provider to another is the requirements of the US Health Insurance Portability and Accountability Act (HIPAA) of 1996. Two major elements of that act are privacy and security. Summaries (if you can call a 25-page document a “summary”) are available from the US Department of Health and Human Services.
The privacy summary states: “A major goal of the Privacy Rule is to assure that individuals’ health information is properly protected while allowing the flow of health information needed to provide and promote high quality health care and to protect the public’s health and well being,” which is all well and good, but almost all healthcare providers have chosen to not allow the actual individuals described by the data to really have much control in its use – except, as I noted, using 19th century tools.
Now as Martin will tell you (and keep telling you until the concept sinks in), Life Management Platforms are much more than Personal Data Stores, much more than some personal database of attributes and identifiers. While the LMP does integrate with a full panoply of attributes and identifiers, it also – and, indeed, must – interact with what another colleague, Craig Burton, calls “the API Economy.” With an open, accessible Application Programming Interface (API), my LMP could interact with each of my healthcare providers, move data to my personal health record datastore and move it to another healthcare provider as needed. For emergency situations I could conceivably have an API connection between my healthcare data and an organization such as the MedicAlert Foundation, so that up-to-the-minute information would be available to first responders and other emergency service providers through the contact details on my MedicAlert bracelet or pendant – whether I was conscious or not.
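As a thought experiment, the consent-brokered portability described here might look like the sketch below. Every class, method and provider name in it is hypothetical – no such APIs exist today, which is exactly the problem.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: neither PAMF, KP nor MedicAlert expose such
# an API today - that is precisely the gap this post describes.

@dataclass
class HealthRecord:
    patient_id: str
    entries: list = field(default_factory=list)   # test results, scripts...

class LifeManagementPlatform:
    """A personal datastore that brokers consented data flows."""
    def __init__(self):
        self.records = {}
        self.grants = set()        # (provider, patient_id) consent pairs

    def import_from(self, provider_name, record):
        """Pull my record from a provider into my own datastore."""
        self.records[provider_name] = record

    def grant_access(self, provider_name, patient_id):
        """Record my consent for a provider to receive this data."""
        self.grants.add((provider_name, patient_id))

    def export_to(self, source, target_provider):
        """Release data only where consent is on file."""
        record = self.records[source]
        if (target_provider, record.patient_id) not in self.grants:
            raise PermissionError("no consent on file for " + target_provider)
        return record   # in reality: pushed over the target's API
```

The point of the sketch is the control flow: the individual, not the provider, holds the record and the consent list, so moving from one HMO to another becomes a grant plus an export rather than copy-paste-and-print.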
Of course, most of the “6 innovations” CIO magazine was touting as necessary for modern healthcare are also easily enabled with Life Management Platforms:
Reality Mining (Innovation #1): Using Data to Influence Healthy Behavior – a mobile device collects data, stores it in your health record, then combines it with very large data sets (perhaps with a cloud based service) to correlate activities/lifestyles with both good and bad health effects. Another API-based service can then take this data and formulate a personalized health plan (see innovation #6), or suggest social networking opportunities to reinforce that health plan (Innovation #2).
Usability (Innovation #3): Give Users Something Familiar – the LMP provides a common user interface for all of its functions – a major improvement over today’s mish-mosh of services and applications designed only to stand on their own.
Home Care (Innovation #4): Make It Easy, Involve Everyone – using open APIs, caregivers as well as emergency monitors and providers could have the information they need – and, from a privacy perspective, only the information they need – to insure continued good health without the need for early institutionalization. Health monitoring systems, by the way, as part of the Internet of Things, can easily interface with the Life Management Platform thus satisfying Innovation #5, Emotion Sensors: For the Willing, Anything Can Be Monitored.
Wellness Counseling (Innovation #6): Sometimes, People Like Talking to Computers – this almost perfectly describes a relatively easy to implement LMP service that would ensure individuals understand their prognosis and course of action, but also take a major role in implementing those activities.
Hopefully you’ve seen that Life Management Platforms are just what the doctor ordered for 21st century healthcare. But the really amazing part is that healthcare is only one small area in which LMPs can improve our lives. Banking, insurance, shopping, travel, lifestyle, food, social and work responsibilities – almost everything we do in our lives can be enhanced through a Life Management Platform.
We’ll be covering LMPs (and the API Economy as well as the Internet of Things) in more depth at the European Identity and Cloud Conference in May. I hope to see you there.