27.08.2013 by Dave Kearns
I subscribe to a clipping service which delivers pointers to web documents to my inbox for items I’m interested in (described by keywords). Just last week, it pulled in a document about “The Changing Role of the Access Control Credential.” Well, access control is one of my favorite topics, so I jumped right into it. Only later did I discover that the article in question had actually been published almost a year earlier. (The clipping service is always adding new sources and scans all of the available documents on each new source, so from time to time I do get somewhat outdated links.)
Still, it was interesting, especially when I read: “The next generation of access control credentials are expected to do more than provide door access.” I chuckled, because I’d written something very similar a decade ago when, after that year’s RSA Conference I commented on the fact that all the security vendors wanted to talk about was converging physical and logical access (that is, building and computer access) via smartcards and proximity cards. The work did start back then, but it evidently hasn’t progressed very far.
Back then, RFID (Radio Frequency Identification) was seen as the technology that would make it all happen. Some were touting it as the final nail in the coffin of passwords. You may have noticed that that didn’t actually happen.
I closely followed one experiment with RFID-enabled proximity cards for both physical and logical access. The HMO (Health Maintenance Organization) I belonged to issued cards to its doctors and other practitioners. When I visited my doctor, he only had to sit down at the keyboard in the examining room and he was automatically logged in to the system, able to pull up my records with no further authentication needed. The same card, when in the vicinity of a locked door that the doctor was authorized to use, would open the lock so he could go through. Initially, it seemed like a very good system.
That is, until the day I noticed that the computer in the examining room in which I was waiting kept jumping from the “please log in” screen to one which appeared to give me access to patient records! There was obviously no shielding in the walls, and a doctor accessing the computer in the next room was also activating the one in the room I was in. I pointed this out to my doctor when he arrived and, I was pleased to notice, the next time I visited the office that system was gone.
RFID technology can be read from 1 meter to 100 meters away depending on the frequency used. One meter is far too short for something to be used for continuous logical access (one of the best uses of proximity cards for computers – authorization happens when the card is read and the session is ended when the card is removed from proximity) – imagine if your screen went blank whenever you leaned back! Longer ranges, though, bring up the image of all the PCs in your office responding (and logging you in, then out) as you walk down the hallway.
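That session model – log in when the card comes into read range, log out the moment it leaves – can be sketched as a toy state machine. This is only an illustration of the pattern; the class and the card IDs are hypothetical, not any vendor’s API:

```python
class ProximitySession:
    """Toy model of proximity-card logon: a session exists only
    while an authorized card is in read range."""

    def __init__(self, authorized_ids):
        self.authorized = set(authorized_ids)
        self.active_user = None

    def card_seen(self, card_id):
        # Card enters read range: start a session if it's authorized.
        if card_id in self.authorized and self.active_user is None:
            self.active_user = card_id
        return self.active_user

    def card_lost(self, card_id):
        # Card leaves read range: end the session immediately.
        if self.active_user == card_id:
            self.active_user = None
        return self.active_user


ws = ProximitySession({"dr-jones-0417"})
ws.card_seen("dr-jones-0417")   # doctor sits down: logged in
assert ws.active_user == "dr-jones-0417"
ws.card_lost("dr-jones-0417")   # doctor walks away: logged out
assert ws.active_user is None
```

The “logged out when the card leaves” rule is exactly what makes the read range so critical: too short and leaning back ends your session, too long and the wrong workstation starts one.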
Using such a card as a single-factor authentication device also presents the problem of lost or stolen cards – there’s no check on what someone else can do if they’re holding your credentials.
Over the years, Near Field Communication (NFC) technology has gained popularity, especially as compared to RFID. The major benefit is that NFC allows two-way communication (RFID is limited to one-way). The NFC device can also store data (such as authorizations). Given the two-way nature, whenever the card is used – either for physical or logical access – that data can be updated, modified or removed, or even have new authorizations added. Still, having possession of the device is all that’s necessary for the authorization to work.
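That two-way update pattern – the reader rewriting the authorizations stored on the credential each time it is used – can be sketched roughly as follows. The class names and permission strings are invented for illustration:

```python
class NfcCredential:
    """Toy model of an NFC credential whose stored authorizations
    can be rewritten by the reader on each use (two-way channel)."""

    def __init__(self):
        self.authorizations = set()

    def present_to_reader(self, reader):
        # The reader validates the tap, then pushes updates back
        # to the card over the same two-way channel.
        granted, revoked = reader.decide_updates(self.authorizations)
        self.authorizations |= granted
        self.authorizations -= revoked
        return self.authorizations


class DoorReader:
    """Stand-in for reader-side policy: what to grant or revoke."""

    def __init__(self, grant=frozenset(), revoke=frozenset()):
        self.grant, self.revoke = set(grant), set(revoke)

    def decide_updates(self, current):
        return self.grant, self.revoke


card = NfcCredential()
card.present_to_reader(DoorReader(grant={"lab-door", "server-room"}))
card.present_to_reader(DoorReader(revoke={"server-room"}))
assert card.authorizations == {"lab-door"}
```

A one-way RFID tag, by contrast, could only ever report a static identifier; the revocation step above is exactly what it cannot do.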
The answer there, of course, is two-factor authentication (2FA) which is gaining ground through the NFC-enablement of smartphones. It’s estimated that 2.3 percent of cellular handsets, or 35.4 million, shipped in 2011 were NFC-enabled. In 2014, the NFC attach rate is projected to reach around 20 percent and in 2016, 44 percent. This equates to shipments of 377 million NFC-enabled handsets in 2014, and 918 million in 2016. And those are conservative estimates.
Having to authenticate to the device before using it as a proximity device – either through a PIN or, with the ever-increasing number of phones that read fingerprints, faceprints and even voiceprints, a biometric – overcomes most of the problems of a misplaced device. A stolen one, of course, could be de-provisioned as soon as it was reported, and wiped the first time it was used after that.
It isn’t too farfetched to believe that the NFC-enabled smartphone will become the cornerstone of the Life Management Platform of the future.
Register now for my upcoming webinar “Authorization as a Calculated Risk” coming up on September 26 where we’ll explore the future world of access control. I’ll be joined by some surprise guests from the industry who can deliver unique insights on current developments in the field.
13.08.2013 by Dave Kearns
It was just a couple of months ago that trend-watcher The Next Web announced that Google Chrome had overtaken Microsoft’s Internet Explorer as the Web’s most used browser, a position that IE had held since, well, way back in the last century.
So it’s unfortunate that just last week it was revealed (yet again) that Chrome is not very protective of stored passwords.
According to a story in the UK’s Telegraph newspaper, a security flaw in Google’s Chrome browser allows anyone with access to a user’s computer to see all of their stored passwords directly from the settings panel.
Software developer Elliott Kember discovered that simply typing “chrome://settings/passwords” into the URL/search bar brings up a list of all stored passwords. While the passwords come up obscured (like: ftp://idmjournal.com dkearns ********), clicking the line brings up a “show” button, and clicking that button reveals the password! Even more shocking, Kember found there’s no way to require a “master password” that can be invoked before seeing the individual ones. Anyone with access to your computer can bring up Chrome and see all of your passwords – and the associated usernames.
Even worse, Kember found that when migrating to Chrome (he was moving from Safari, but moving from IE or Firefox works the same way) you have to import all of the saved passwords, and they will all be available whether you want them to be or not.
Note that Ping Identity’s John Fontana just wrote about this problem and stated that it was first brought to light in 2008!
In a world where we seem to spend an inordinate amount of time talking about strong authentication, flaws like this show that no matter how “strong” we think our passwords and passphrases are they’re completely vulnerable to a flawed system. A password like “=43-9 ut0193q$#qrgvDFG1235Avpr” is no more likely to be safe than “password1”.
The first thing you’ll want to do, of course, is remove all of your passwords from Chrome. Make a note of them, and then delete them.
Next, you’ll need a better “password vault”. As I noted last fall, for many years I used a browser add-in called “Sxipper,” developed by Dick Hardt who was a co-founder of OpenID. Sxipper was not only a tool to remember usernames and passwords (as well as all the details needed to fill out forms) but was also a password generator, creating randomized groupings of letters, numerals and other characters that were well past the ability of most users to remember. But, of course, they didn’t need to remember them – Sxipper did it for them. Sxipper could save a file containing all of your data to local storage (in case there was ever a problem) but, like Chrome, this wasn’t encrypted, nor was authentication required to access Sxipper once your computer was up and running (i.e., authenticate to the OS and you could run Sxipper). Sxipper was officially killed early this year.
Even before that, though, I’d switched to using Chipdrive MyKey from SCM (now Identiv). Besides encrypting the archive file, it uses a USB stick, which makes the service portable among all of your USB-enabled devices. It doesn’t, unfortunately, create passwords, so I do need to be disciplined about that, but it does protect the encrypted password file with a PIN. While it’s true that someone using my computer could use MyKey to authenticate, they wouldn’t be able to obtain a list of my passwords. Of course, they could log in to a site and change the password, so there are potential problems.
So what can you do?
It’s simple: stop using passwords to authenticate.
Wait, you say, that isn’t my choice, is it? Well, it could be.
One of the most popular password vaults is LastPass, which encrypts your passwords and requires a master password to change them. It also syncs passwords across all of your devices. But the real kicker – although it’s a pay-to-play premium option – is the ability to add two-factor authentication (2FA) to any site you visit.
LastPass has teamed up with Yubico to enable 2FA using the Yubikey hardware token. In practice, you use the Yubikey to authenticate to LastPass which then authenticates you with the correct username/password combination. Someone gaining control of your computer would still be locked out of password-protected sites if they didn’t have your Yubikey. A single user Yubikey plus a one year subscription to LastPass premium will set you back a minimum of $33. There are bundles for enterprise customers, but no great savings.
While you still need a password to login, I see this as merely an indication of the account you wish to access (the username+password combination) with the token providing the actual authentication mechanism. That means you really don’t have to go for long, involved passwords – “password1” would be as secure as a 2000 character bit of nonsense.
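For the curious, the kind of one-time code such a hardware token or phone app supplies can be illustrated with the standard HOTP/TOTP algorithms (RFCs 4226 and 6238). This is a generic sketch of those algorithms, not a description of how LastPass and Yubico actually implement their integration:

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): an event-based one-time code derived from a
    shared secret and a counter via HMAC-SHA1 dynamic truncation."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP keyed to the current 30-second window."""
    return hotp(secret, int(time.time()) // step)


# Server and token share the secret; the password alone is not enough.
shared_secret = b"12345678901234567890"
assert hotp(shared_secret, 0) == "755224"  # RFC 4226 test vector
```

The point is that the verifier checks something only the token holder can compute – so even a leaked password, on its own, opens nothing.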
Slowly, major web properties are moving to two-factor authentication. Google did this some time ago; Twitter just recently announced it. But most of these rely on SMS messages as the second factor and, as last year’s Eurograbber attack – which stole over 36 million euros through a sophisticated man-in-the-middle attack against SMS systems – demonstrated, this might not be your best choice. As security vendor Check Point described the exploit:
- Target goes to malware website, probably by clicking a link in a phishing email
- Target later goes to bank web site and malware injects instructions into the session that prompts the customer to enter their mobile phone number
- Target is instructed (seemingly by bank) to download “security update” to phone which is actually a variant of the “Zeus in the mobile” (ZITMO) Trojan.
- The Trojan then intercepts the bank’s SMS containing the all-important “transaction authorization number” (TAN), the number the target has to enter in the bank’s web site authentication screen. The Eurograbber Trojan on the customer’s mobile device intercepts the SMS and uses the TAN to complete its own transaction to silently transfer money out of the bank customer’s account.
This all happens quietly in the background with the target only becoming aware when they notice their depleted bank account.
The Google and Twitter 2FA offerings are free, while the Yubico+LastPass combination will set you back $33 the first year, and approximately $12 each subsequent year. On the other hand, I think in this case you really do get what you pay for.
Do note that there are some open source possibilities which offer this functionality to a greater or lesser extent. KeePass (a password manager) can be used with the add-on OtpKeyProv, which provides one-time password access to the KeePass database. A modification of KeePass, called Web KeePass, can be used with WiKID’s 2FA system – directions for this are here. Both of these solutions require a bit of DIY, but it could be educational for you to try.
30.07.2013 by Dave Kearns
We had a lively webinar last week on “The Future of Authentication and Authorization”. If you missed it, you can watch the replay. Essentially, what I was talking about was context-aware Risk-Based Access Control (RiskBAC).
The day after the webinar, I got involved in a very lively Twitter chat with a handful of the Identirati/Identorati (some spell it one way, some the other, but it’s the collective term for those in the Identity business in one way or another) about attributes, Attribute Providers (APs), Identity Providers (IdPs) and Relying Parties (RPs).
So how are these related?
Context information, which I noted could also be referred to as “metadata” (a term much in the news lately; see “Metadata: Separating fact from fiction” in the Toronto Star), is part of the collection of attributes surrounding an identity (or “digital identity” as some would have it). As the Star’s article notes, “The truth is that collecting metadata can actually be more revealing than accessing the content of our communications.” This was neatly summed up by Ping Identity CEO Andre Durand when he said “the sum of the correlation between attributes is greater than the sum of the raw attributes themselves.” But he said that at Digital ID World 10 years ago!
Durand later addressed this issue again when he wrote: “I read a great Editor’s note in CIO Insight this month on the effective end of privacy as corporations build massive customer databases in an attempt to better understand how, who, when and what to sell to people. In federation terms, I call this ‘attribute-hording’, the concept that companies aggregate our attributes and then leverage the aggregation of these attributes to build ever more complex algorithms for predicting our behavior.” And he wrote that in 2004.
Yes, we should have been worried about “big data” back in 2004. With the advent of the US Patriot Act (which was actually passed in 2001, then renewed in 2011), a few forward thinkers (he says, patting himself on the back) noted at that time that the very things revealed by Edward Snowden were not only possible, but likely.
Still, the revelations and the subsequent discussions about metadata make it possible to talk about context and attributes without having to offer a lengthy explanation of them.
Identity Providers (IdPs) should be familiar to most of you. Web sites such as Google, Facebook, Yahoo!, PayPal, Amazon, etc. offer authentication services (usually through OpenID Connect and the OAuth protocol). When you visit many websites, you are prompted to “login” with your Google/Facebook/PayPal/etc. ID. Generally, these sites hold some attribute/value information, but it’s somewhat limited. (Neither Google nor Facebook knows your street address or social security number, for example.) Nevertheless, it’s quite possible for them to form partnerships with other web-based enterprises which do hold such attribute/value information about you. Google offers a good explanation of this in an article called “Overview of Attribute Providers,” with the following example:
“The popular TV channel HBO operates a website today called hbogo.com that can be used to watch HBO movies. However users have to first login and provide a trustworthy assertion that HBO can use to confirm the person is paying for an HBO subscription through a cable operator. Imagine that a user who visited HBO simply saw a list of popular identity providers, and chose theirs, such as Google. Google would then ask the user for their consent to share some of their information with hbogo.com. In the future that information might include their email address and their street address. If the user gives their consent, then hbogo.com can contact the cable operators who serve the area around the user’s house, and ask if that household has a paying subscription to HBO. If so, then with two simple clicks a user will be able to watch HBO movies on their computer.
Before the user’s identity provider can help the user assert their street address to hbogo.com, the user needs to have first selected an attribute provider who can validate their street address, and link that attribute provider to their identity provider. Companies may even compete to be a user’s attribute provider. For example when a user logs into their online banking service, they may see a promotion to use that bank as the attribute provider of their street address. If they click that promotion, the banks would explain why the user might want to do this, and if the user gives their approval then the bank would redirect the user to their identity provider. The identity provider would then ask the user to consent to using this bank as the attribute provider of their street address.”
Now there’s no question that Attribute Providers will be important in the future, especially in the future of context aware risk-based access control. Some people, in fact, believe that relying parties should connect directly to the APs, bypassing the Identity Providers, since “Identity” (as we noted above) is simply a set of attributes – specifically, the set of attributes that makes you, as an entity, unique within a given namespace (your town, amazon.com, registered EIC attendees, etc.). As an example, I’m not the only David Kearns within my city, nor within my town, nor within my postal code. So saying “David Kearns, Montgomery Village, Gaithersburg, 20886 Maryland USA” isn’t unique (my son has those same attributes). Include my age, my street address, my phone number, the registration on my car, etc. and it does become unique. But there are a number of different Attribute Providers involved in gathering those values. Asking every relying party to connect with every attribute provider is really asking too much.
Identity Providers, though, are ideally placed to interact with users, APs and RPs. In fact, that’s the business of an IdP. So when I wish to do business with a new relying party – one which accepts authentication from an IdP I use – the RP can request what it considers the necessary and sufficient attributes and values concerning me, and the IdP can quickly and easily verify the current values while at the same time getting my consent to reveal them.
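That broker role – the IdP fetching only the attributes the RP asked for, and only those the user has consented to release – can be sketched in a few lines. Everything here (the provider classes, attribute names, user ID and values) is a hypothetical stand-in, not any real IdP’s API:

```python
class DictAP:
    """Stand-in Attribute Provider backed by a dictionary."""

    def __init__(self, records):
        self.records = records

    def lookup(self, user, attr):
        return self.records.get(user, {}).get(attr)


def idp_assert_attributes(user, rp_request, attribute_providers, consent):
    """Sketch of an IdP brokering between a Relying Party and several
    Attribute Providers, releasing only consented attributes."""
    released = {}
    for attr in rp_request:
        if not consent.get(attr):
            continue  # user declined to release this attribute
        for ap in attribute_providers:
            value = ap.lookup(user, attr)
            if value is not None:
                released[attr] = value  # first AP with a value wins
                break
    return released


bank_ap = DictAP({"dkearns": {"street_address": "123 Example Way"}})
dmv_ap = DictAP({"dkearns": {"age_over_18": True}})

claims = idp_assert_attributes(
    "dkearns",
    rp_request=["street_address", "age_over_18", "ssn"],
    attribute_providers=[bank_ap, dmv_ap],
    consent={"street_address": True, "age_over_18": True, "ssn": False},
)
assert claims == {"street_address": "123 Example Way", "age_over_18": True}
```

Note that the RP asked for three attributes but received two: the consent check sits with the IdP, which is precisely the leverage of putting it in the middle.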
That’s a big part of the future of Access Control, and we’ll explore it in more detail in September in a new webinar called “Authorization as a Calculated Risk.” More details as they become available.
16.07.2013 by Dave Kearns
At last week’s fourth annual Cloud Identity Summit (founded and curated by Ping Identity) people were still buzzing about the hornets’ nest we had stirred up a year earlier at the third summit when we baldly proclaimed “SAML is dead”.
SAML, the Security Assertion Markup Language, is part and parcel of the Ping Identity federation products. For the last twelve months I’ve been inundated with examples (many from Ping employees) of how SAML is still being implemented today.
Of course, as I noted at the time, the presentation was called “The Future of Authentication” and the context of the bald statement was: “SAML is dead does not mean SAML is bad. SAML is dead does not mean SAML isn’t useful. SAML is dead means SAML is not the future.”
So in that context it was very interesting to hear what Ping Chief Technology Officer Patrick Harding had to say in his keynote to this year’s Summit. He told his audience that what is needed is “a modernized identity protocol stack that is baked into every application that scales to Internet proportions, and hides its complexity from developers and end-users.” He went on to say that the foundation for this “identity stack” is shaped by a trio of emerging protocols — OAuth, OpenID Connect (OIDC) and the System for Cross-Domain Identity Management (SCIM).
You’ll note the decided absence of SAML.
In fact, in the session a year ago we essentially said the same thing: OpenID Connect and OAuth 2 were the foundation for the future of authentication and authorization.
Harding also emphasized that in the coming API economy (another favorite of the analysts at KuppingerCole) it would not be about identity-enabled APIs with access tokens in their requests, but specific APIs for identity. The key, according to Patrick, is simplifying everything as much as possible. “It is automation for developers, for end-users, we have to eliminate all the friction here,” he said. “Developers should not have to know how OIDC and SCIM work.” Instead, they should simply call an API just as they do for other services within their applications.
As we stated when we awarded the OpenID Foundation a 2012 European Identity Award:
“OpenID Connect is a simple JSON/REST-based interoperable identity protocol built on top of the OAuth 2.0 family of specifications. Its design philosophy is ‘make simple things simple and make complicated things possible’.
While OAuth 2.0 is a generic access authorization delegation protocol, thus enabling the transfer of arbitrary data, it does not define ways to authenticate users or communicate information about them. OpenID Connect provides a secure, flexible, and interoperable identity layer on top of OAuth 2.0 so that digital identities can be easily used across sites and applications.”
OpenID Connect allows a user to authenticate to an app, a service or a site (generically termed a Relying Party, or RP) using an identity established with another system, called the Identity Provider (IdP). Well-known IdPs include Google and Facebook.
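Under the hood, the assertion the IdP hands the RP is an OpenID Connect ID Token – a JWT of three base64url-encoded segments (header, claims, signature). Here’s a minimal sketch of reading its claims; it deliberately skips signature verification, which real code must never do, and the issuer and subject values are made up:

```python
import base64
import json


def decode_jwt_claims(id_token: str) -> dict:
    """Decode (WITHOUT verifying!) the claims segment of a JWT.
    Production code must verify the signature before trusting claims."""
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))


def b64url(obj) -> str:
    """base64url-encode a JSON object, padding stripped per the spec."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


# Build a toy, unsigned token just to show the structure.
header = b64url({"alg": "none"})
claims = b64url({"iss": "https://idp.example.com",
                 "sub": "user-42",
                 "aud": "relying-party"})
token = f"{header}.{claims}."

assert decode_jwt_claims(token)["sub"] == "user-42"
```

The `iss` (issuer), `sub` (subject) and `aud` (audience) claims are the core of the handoff: the RP learns who vouched, for whom, and that the token was minted for it and not some other site.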
While it’s true that wider implementation of the “identity layer” Harding outlined in his talk would greatly benefit users by vastly reducing the number of login ceremonies they’d be presented with (and, quite likely, the number of passwords to remember), there is an unspoken problem – the “elephant in the room,” as it were. That is, the user must, at some point, authenticate to the Identity Provider!
Today, that means logging in to Google or Facebook or another IdP by providing, you guessed it, an account name and a password. Some IdPs (like Google) provide the ability to use two-factor authentication (2FA), but it is optional and – from a security perspective – not much safer than a password alone.
The communications industries speak of “the last mile” (or last kilometer) as the most difficult part of their infrastructure – the connection from the backbone network to the individual subscriber. It is, typically, the slowest, most difficult – and most expensive on a per user basis – part of the network. That login to the IdP is the “last mile” for the identity stack that Mr. Harding (and Craig Burton before him) was talking about as our necessary future.
What we need, really, is an adaptive, dynamic, risk-based authentication and authorization system, based on context derived via open APIs, to correctly and properly authenticate those seeking access and determine the proper level of authorization for them.
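To make that concrete, here is a deliberately naive sketch of such a risk calculation. The context signals, weights and thresholds are invented for illustration – a real system would derive them from far richer context, delivered via those open APIs – but the shape of the decision is the point:

```python
def risk_score(context: dict) -> float:
    """Sum invented weights for each risky signal in the context.
    Signals and weights are illustrative assumptions only."""
    score = 0.0
    if context.get("new_device"):
        score += 0.4
    if context.get("unusual_location"):
        score += 0.3
    if context.get("off_hours"):
        score += 0.2
    if context.get("tor_exit_node"):
        score += 0.5
    return score


def required_factors(context: dict) -> list:
    """Map the risk score to an escalating authentication demand."""
    score = risk_score(context)
    if score < 0.3:
        return ["password"]
    if score < 0.7:
        return ["password", "otp"]
    return ["password", "otp", "out_of_band_approval"]


# Low risk: the usual device, location and hours -> password suffices.
assert required_factors({}) == ["password"]
# Elevated risk: a new device at 3 a.m. -> demand a second factor.
assert required_factors({"new_device": True, "off_hours": True}) == \
    ["password", "otp"]
```

The authorization side works the same way: rather than a binary grant, the level of access allowed can shrink as the computed risk grows.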
If you are at all interested in this, you’ll want to join me for two upcoming webinars.
Next week, I’ll be discussing in greater depth “the Future of Authentication and Authorization,” and expanding on the thoughts I’ve laid out here. Then, in September, we’ll go into detail about how to bring about that future when I present “Authorization as a Calculated Risk.” (link to follow).
The authentication train is leaving the station, headed to the future. You really want to be on board.
02.07.2013 by Dave Kearns
I recently saw a query from a major international consumer goods company headquartered in Europe asking “…we have a Privacy Officer, but we are also looking into the possibility for an Identity Officer, someone who is knowledgeable about SSO, identity and so forth. Unfortunately we are fairly new to the entire identity sphere.” It was an interesting question, so I started a dialog with my colleagues at KuppingerCole to see if we could come to some agreement about the need for such a position.
Martin Kuppinger voiced the thought many of us had when he said “Yet another ‘Chief whatever Officer’?” And it’s true the list of CxOs is beginning to read like the list of vice presidents of a US bank (a quick search on LinkedIn for those holding the current title “senior vice president” associated with Bank of America turns up 4706 entries!)
Since the questioner mentioned that they had a “privacy officer”, I searched to see if this was, in fact, a CPO. But it’s not. Questions about privacy are to be sent to the world headquarters “attn: Privacy Officer” or to any of their national HQs with the same notation. To me that means there’s someone in each office who is tasked with tracking privacy issues. Could an “identity officer” do the same? Since it doesn’t appear that the company wants to create yet another CxO, I decided to explore the possibilities of a privacy officer further.
A non-exhaustive search on Google brought up only one company with a person having the title “identity officer.” I did ignore the few references to people who were in marketing and were charged with creating or fostering “brand identity”: that is, the way customers/clients/consumers view the company. Palo Alto’s IdentityMind announced a Chief Identity Officer early last year. As it turns out, though, Dr. Taher Elgamal (the newly named CIdO) was to be more outward looking than to be concerned with internal identity issues. He’s quoted in the announcement as saying: “With IdentityMind we have developed a technology that allows us to establish the identity in the context of an Internet payment transaction with high degree of certainty. In this new role I now have the ability to focus on working with the Industry in expanding this definition and bridging the gap between users and their Internet identities.” So, still not what the gentleman who asked the question was looking for.
Further searching turned up a couple of postings/musings by people I follow in the Identity arena: Matt Flynn (currently with Oracle, previously with MaXware, RSA, Netvision and StealthBITS) and Matt Pollicove (currently with Commercium Technology specializing in SAP Security, previously with MaXware, Mycroft and Secude). While they were initially writing in terms of a Chief Identity Officer, their thoughts are still valid for a lower-level Identity Officer – even if they both wrote about this 7 years ago!
Unfortunately, no one seems to have taken up this conversation, nor done anything about instituting such a position since then. Maybe the time wasn’t ripe. Maybe now is the right time.
Pollicove, who started the conversation, stated he was doing it because “I am constantly thinking about how to make Identity Management a larger part of the enterprise, not only because it makes sense from [a] security and compliance [perspective], but because good, clean, organized IdM data results in a better running organization.” In other words, he thought IdM needed more visibility in the enterprise and that a CIdO would go a long way towards gaining this. Now I think we can agree that the visibility of Identity has risen, and risen considerably, within the enterprise without having a CIdO, but it’s generally risen for negative reasons (data breaches, cyber-snooping, etc.) rather than for, as Matt hoped, “a better running organization.”
Matt Flynn went a little further: “I agree with Matt that one owner would certainly make IdM projects easier to manage, but that’s not the greatest benefit.” He elaborated: “I guess my vision would include a Director of Identity that reports to the [Chief Information Officer] or equivalent. She would be responsible for compliance, attestation requirements, establishing Identity policies, ownership of IdM solutions, backup and recovery solutions for identity-enabled applications, etc.” To which I’d also add access control.
Flynn went on to say, “A director of IdM … would need to find solutions that enable the business, facilitate ease-of-use and also maintain strict security guidelines. IdM solutions span the enterprise and the design, architecture and management thereof ought to be central.”
In your organization, especially if it’s a large one, is there a single person – or office – that’s responsible for everything touching identity throughout the entire enterprise? Does anyone coordinate policy on identity for employees, contractors, volunteers, clients, vendors, partners, etc.? If not, why not? If not – wouldn’t you operate more efficiently if there was one? More importantly, wouldn’t your organization be more secure with fewer cracks for the malefactors to slip through?
Would the director of Identity be part of the IT organization, or be on the business side? KuppingerCole’s white paper, “The Future of IT Organizations” will give you some help in making that decision.
I’m not saying you need a Chief Identity Officer, but perhaps a Director of Identity (or Identity Systems) within the office of the Chief Privacy Officer (or the CIO) might be the answer. Let me have your thoughts on this, either in the comments section, email to dk AT kuppingercole DOT com, or tweets to @dak3. If there’s enough interest, I’ll continue the conversation.
By the way, if you missed EIC last month, or missed my presentation on the Future of Authentication and Authorization, I’ll be webcasting an updated version in just a few weeks. It’s free but, of course, space is limited. Sign up now.
18.06.2013 by Dave Kearns
One of my favorite passages from Lewis Carroll is the dialog in “Through the Looking Glass” between Alice and Humpty Dumpty:
“There’s glory for you!”
“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t — till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument,’ ” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master — that’s all.”
It often seems exactly like that when we have technical discussions.
I first became aware of this when the areas of Identity and Security began to seriously overlap late last century. Around 2005 a group, the “Identity Gang,” coalesced around the idea of having informal discussions about identity issues before, during, or after conferences (Catalyst, Digital ID World, EIC, etc.). What we all quickly discovered was that we didn’t agree on the meanings of terms. So we launched the Lexicon project. We didn’t get very far.
When I joined the Identity Ecosystem Steering Group (IdESG) last year, the same issue came up almost immediately – we didn’t all agree on the meaning of terms. A project was started to create a taxonomy for the IdESG which, incredibly, ended up containing 785 terms! Some had only one definition listed, but others had 2, 3, 4 – up to 13 – different definitions. Needless to say, this is still an ongoing project with no end in sight.
I bring this up because of a Twitter conversation I was having yesterday. While Twitter isn’t ideal for dialog among more than 3 people (there were, at times, five involved in this discussion) it does have an immediacy that other methods (chat forums, email, et al) don’t. The drawback, of course, is the 140 character limitation per tweet, which leads to lots of abbreviations, elided letters and texting shorthand – none of which is helpful for understanding, especially among people who normally don’t converse with each other.
What happened was that someone referred to “PII” which I understand as Personally Identifiable Information. Others, though, consider it an abbreviation for Personal Identifying Information. There’s a subtle difference.
Personally Identifiable Information (I’ll call this PII1) is information, either a single attribute or a combination of attributes, which can uniquely identify an individual in a given context or namespace. Your date of birth does not uniquely identify you, but in combination with your mother’s maiden name and place of birth certainly can. Other attributes such as a national ID number (Social Security, National Health, etc.) are PII all by themselves.
Personal Identifying Information (PII2), on the other hand, is defined as “Information which can be used to distinguish or trace an individual’s identity, such as their name, social security number, biometric records, etc. alone, or when combined with other personal or identifying information which is linked or linkable to a specific individual, such as date and place of birth, mother’s maiden name, etc.” according to the IdESG taxonomy, quoting from the US government’s FICAM Trust Framework Provider Adoption Process (that’s the US Federal Identity, Credential and Access Management Program).
The difference between the two is subtle, but significant. Under PII1, date of birth – in and of itself – is not PII. Under PII2 it is, even though it doesn’t uniquely identify you. So when creating privacy law or examining privacy issues (such as those raised by the US government’s PRISM program) it is very important to know which definition of PII is being used. Suppose, for example, it becomes illegal to knowingly distribute PII of others. Or, in the context of computer breach situations, it becomes necessary to inform entities when their PII has been leaked or stolen. How do we decide if it really is PII that’s escaped into the wild?
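To make the distinction concrete, here is a minimal Python sketch contrasting the two definitions. The attribute names and the sets they fall into are invented for illustration, not drawn from the IdESG taxonomy:

```python
# Illustrative sketch only: attribute names and sets are hypothetical,
# not taken from any official taxonomy.

# PII1: attributes that uniquely identify a person all by themselves.
UNIQUE_ALONE = {"ssn", "passport_number", "national_health_id"}

# PII2: attributes "linked or linkable" to an individual, even if
# no single one of them is uniquely identifying on its own.
LINKABLE = UNIQUE_ALONE | {"name", "date_of_birth", "place_of_birth",
                           "mothers_maiden_name"}

def is_pii1(attribute: str) -> bool:
    """Personally Identifiable Information: unique in itself."""
    return attribute in UNIQUE_ALONE

def is_pii2(attribute: str) -> bool:
    """Personal Identifying Information: linkable to an individual."""
    return attribute in LINKABLE

# Date of birth: not PII under the first definition, PII under the second.
print(is_pii1("date_of_birth"), is_pii2("date_of_birth"))  # False True
```

The same attribute gets a different answer depending on which definition a privacy law adopts – which is exactly why the choice of definition matters.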
The definition I use, PII1, is entirely in keeping with the work KuppingerCole has done on Information Stewardship, where we differentiate between data and information – “Data is nothing more than the symbols which are processed by the computer. Data, in itself, has no meaning and no value. Information is data with context or processing that makes it useful.” Some attributes (such as date of birth) are simply data, of little use without context or other qualifying data that creates information.
We recognize that the terms we use are not always understood by everyone. In fact, we at KuppingerCole have the added problem of bi-lingual (in our writing) and multi-lingual (in our discussions) use of terms. “Digital Identity” (which Google tells me is “digitale Identität” in German) may have numerous translations each with multiple meanings.
For this reason, almost all KuppingerCole published works include a Glossary section, in which we define the terms used in the paper. This doesn’t mean that the definitions we use are universally accepted nor that other definitions might not be better. It simply means that when the term is used in the publication this is what you should understand it to mean. That way, any subsequent discussion starts off with everyone on the same page, so to speak.
This works well for publications, not so well for impromptu discussions. Maybe we should each create our own personal lexicon/taxonomy/glossary on the ‘net so we could reference it when we Tweet.
04.06.2013 by Dave Kearns
Whenever I talk about passwords these days, or rather the need to end the use of passwords, I really feel I should call myself Cassandra.
In Greek mythology, Cassandra was the daughter of King Priam and Queen Hecuba of Troy. Her beauty caused Apollo to grant her the gift of prophecy. When Cassandra refused Apollo’s attempted seduction, he placed a curse on her so that her predictions, and those of all her descendants, would not be believed. In the understanding of some modernists, Cassandra’s prophecies were flawed and incapable of coming true. But to the ancients (and in the sense I use it) the prophecies were accurate, amazingly so, but disbelieved because of flawed understanding by the listeners. My listeners frequently cite the cost of replacing passwords, the ease of use of passwords, or even the (unfounded) insecurity of other authentication methods. All of that is true, but also irrelevant. The cost of a data breach, the even easier-to-use alternative authentication methods and the incorporation of context-sensitive risk-based access controls (with the authentication step as simply one factor) all support my thesis.
Fortunately, I’m not a Jeremiah. This Biblical prophet (he’s credited with writing the Book of Jeremiah, 1 Kings, 2 Kings and the Book of Lamentations) was attacked by his own brothers, beaten and put into the stocks by a priest and false prophet, imprisoned by the king, threatened with death, and thrown into a cistern by Judah’s officials. These people didn’t like what he had to say! No one would speak up in Jeremiah’s defense, but a number of people, publications and organizations have taken up the cry to banish passwords as the sole method of authentication for user accounts.
One of the people is Google’s Tim Bray. He’s a developer evangelist for the Mountain View company, often speaking to developers, programmers and coders on best practices. Formerly the Director of Web Technologies at Sun Microsystems, Bray was also one of the main authors of the original XML specification. Speaking at the recent Glue Conference (the foremost independent gathering of developers), Tim was vehement in denouncing passwords. As noted by Ping Identity’s John Fontana:
“Near the beginning of his talk, Bray dropped to his knees, pounded the floor with his fists and sent out a plea to web sites asking that they not force him into creating another password. He implored developers to get on board with emerging identity protocols, namely OAuth 2 and OpenID Connect.”
Bray went on to talk about all of the non-password possibilities for authentication that can be built into apps. Fontana reports that Bray told the crowd: “If you go into the password business, you are peeing in the swimming pool” – that is, you’re being anti-social and deserve the scorn of your peers.
One publication that presented stark evidence of the vulnerability of passwords was Ars Technica. In an article called “Anatomy of a hack: How crackers ransack passwords like ‘qeadzcwrsfxv1331’,” they presented the story of three hackers – a developer of cracking software, a security consultant, and an anonymous cracker – given relatively standard PCs, readily available cracking software and 16,449 passwords converted into hashes using the MD5 cryptographic hash function. The person with the most underpowered computer took 20 hours and cracked 14,734 of the hashes, a 90-percent success rate. Another person unscrambled 13,486 hashes (82 percent) in a little more than one hour, using a slightly more powerful machine. The story even cites Ars Technica deputy editor Nate Anderson, never known as a cracker, who deciphered almost half of the passwords in just a couple of hours. Go read the whole article, it’s fascinating – and very scary if you still use passwords, but feel that by hashing and encrypting them that you and your users are safe.
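The arithmetic behind those cracking results is easy to demonstrate. The sketch below is a deliberately naive brute-forcer using nothing but Python’s standard library; real cracking tools use GPUs and wordlists and are orders of magnitude faster:

```python
import hashlib
import itertools
import string

def crack_md5(target_hash, alphabet=string.ascii_lowercase, max_len=4):
    """Exhaustively hash every candidate up to max_len characters and
    compare against the stolen hash. Even naive Python checks hundreds
    of thousands of MD5 hashes per second; dedicated GPU rigs manage
    billions. Returns the recovered password, or None if not found."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# A short lowercase password falls almost instantly:
stolen = hashlib.md5(b"dave").hexdigest()
print(crack_md5(stolen))  # dave
```

MD5 is fast by design, which is precisely what makes it a poor choice for password storage – the hashing in the Ars Technica story offered the crackers almost no resistance.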
One organization that’s taken a step away from passwords is Amazon.com, specifically Amazon Web Services (AWS), the cloud provider arm of the retailing giant. With a newly released API (didn’t we tell you that the API economy was the coming thing?), developers can use Facebook or Google credentials for authentication as well as the newly released Login with Amazon, a free service that lets third-party apps and websites use the online retail giant’s system for authenticating users. If you are interested (and any AWS developer should be), Amazon has published an article including examples of use on its AWS web site.
Finally, the UK’s Daily Mail (in the Mail Online) revealed what’s coming on the authentication horizon. Motorola has shown not one, but two very different authentication schemes which may – or may not – ever see the light of day. The first is called the “Biostamp,” an electronic tattoo made of silicon and containing an electrical circuit, antennae and sensors that bend and move with the wearer’s body. The tattoos, designed by Massachusetts-based firm MC10, are intended for medical purposes to track a patient’s health, but Motorola thinks the technology can be used for authentication purposes, as an alternative to traditional passwords.
The second is even stranger – a ‘vitamin authentication pill’ from Proteus Digital Health, already approved by the U.S. Food and Drug Administration and given European regulatory approval in 2010. It contains a computer chip that is powered, like a battery, by the acid in the wearer’s stomach. Once swallowed, the pill creates an 18-bit ECG-like signal inside the person’s body that can be picked up by mobile devices and authentication hardware outside the body, which could be used to verify that the wearer is the correct owner of the device or account. It’s claimed that the pill could be taken every day for 30 days, if necessary, without any problems.
So there you have it – more evidence that passwords are bad, more experts exhorting developers to stop using them and more major hardware and software firms offering valid alternatives. Still using passwords? What’s your excuse?
21.05.2013 by Dave Kearns
Another European Identity (and Cloud) Conference has come and gone, and once again it was an exciting week with packed session rooms and excellent attendance at the evening events. I’m not sure we can continue to call it the “European” Id Conference, though, as I met folks from Australia, New Zealand, Japan, South Africa and all over North and South America. And lots of Europeans too, I should note. Nor were the attendees content to sit back and soak it all in. At least in the sessions I conducted there was a great deal of give and take between the audience and the speakers and panelists. Most of it was good-natured and looking for information, but – occasionally – it got a bit raucous.
The track on authentication and authorization – so near and dear to my heart – drew a standing-room-only crowd eager to join in. As always when AuthN is discussed, passwords drew an inordinate share of the attention. I reminded the panelists and the audience that no less a personage than Bill Gates predicted the “death of passwords” back in 2004 – and that even within Microsoft, passwords were still in use.
Too much energy is being spent on both trying to remove username/password from the authentication process and trying to “strengthen” the passwords that are used. Neither approach is going to be effective. Passwords – the “something you know” – are far easier to use than “something you have” (a security token) and far less scary than “something you are” (biometrics), so the general public is unlikely ever to entertain the idea of switching.
Password strength is, essentially, a myth. Brute force attacks become quicker every day, so hacking the password directly becomes easier every day. Phishing attacks are getting so sophisticated that there’s no need to hack a password (and possibly set off security alarms) when you can induce the user to give it to you willingly.
Two-factor authentication (2FA) had some champions, but most methods have already been shown to be vulnerable either to direct attacks (man-in-the-middle, or MITM) or to the same phishing attacks that subvert “strong” passwords. The object of the phishing attack is, after all, to have the user log in with their credentials, which are then captured by the hacker. So go to three factors if you want – it’s not much stronger.
I found widespread agreement (with a few diehard holdouts) for a context-collecting risk-based system for Access Control (which I’ve called RiskBAC). Knowing the who, what, when, where, how and why of the authentication ceremony leaves the username/password combo as only one of many factors (the who). In fact, entering a username and correct password isn’t the end of the authentication but merely the trigger to begin the Risk-based Access ceremony or transaction. The other factors are all gathered automatically through system dialogs after the entry of the password has identified the account to which the claimant wishes access.
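A minimal sketch of what that ceremony might look like in code. The context factors and weights below are invented examples for illustration; a real deployment would calibrate them against actual risk data:

```python
# Sketch of a context-collecting, risk-based access ceremony (RiskBAC).
# Factor names and weights are hypothetical, for illustration only.

def risk_score(context: dict) -> float:
    """Combine contextual signals gathered automatically after the
    password has identified the account. Higher score = higher risk."""
    score = 0.0
    if not context.get("known_device"):        # the "how"
        score += 0.3
    if not context.get("usual_location"):      # the "where"
        score += 0.3
    if context.get("outside_business_hours"):  # the "when"
        score += 0.2
    if context.get("sensitive_resource"):      # the "what"
        score += 0.2
    return score

# The correct password is only the trigger; context decides what happens next.
ctx = {"known_device": True, "usual_location": False,
       "outside_business_hours": True, "sensitive_resource": False}
print(risk_score(ctx))  # 0.5
```

The username/password combo is just one factor among many here – exactly the point: a phisher who has the password but logs in from an unknown device in an unusual location still produces a high risk score.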
Of course, once we’re satisfied that the claimant is most likely who he/she claims to be, we then take that information into account along with the other contextual elements to determine the degree of access we’ll authorize to the resource they’re seeking.
While the presentation was called “the Future of Authentication and Authorization,” I did remind the audience that over 2000 years ago the Romans used the same methods for access control. Biometrics (what you are) was represented by facial recognition, tokens (what you have) by scrolls sealed with the leader’s ring (an early use of a security signature), and passwords were, well, passwords – often changed daily to guard against leaks of the information, something more of us should do today.
There was also a contextual element to the access control ceremony when the guard, on observing the claimant, was able to identify him in the context of where he knew the face from – the morning roll call, or the guardhouse. The sealed scroll had context based on what the guard knew about the location (at the camp or thousands of miles away) and condition (alive and kicking, or breathing his last) of the official who sealed the token.
There were lots of other exciting moments – even aha! moments – in the tracks I did on Trust Frameworks and Privacy by Design, as well as in others’ sessions, especially those on Life Management Platforms, a coming technology that many who were hearing about it for the first time agreed will be game-changing when it arrives – and that may not be too far off. If you’d like to catch up, see the just-released Advisory Note: “Life Management Platforms: Control and Privacy for Personal Data” (#70745).
And there was exciting, non-Identity related, news as well. We of course announced EIC 2014 for next May but – remember up at the top of this post I said that it was a larger than European conference? Well we also announced EIC 2014 London, EIC 2014 Toronto and EIC 2014 Singapore. EIC is going worldwide, and the people involved in identity couldn’t be happier. Dates for the new venues haven’t been finalized yet, but I’ll be sure to tell you about them when they are.
07.05.2013 by Dave Kearns
For years we’ve spoken about the 4 “A”s of identity & security – Administration, Authentication, Authorization, and Audit – but maybe it’s time to drop an “A”. Maybe it’s time to speak of “Access Control,” which encompasses Authentication (sometimes referred to as “AuthN”) and Authorization (“AuthZ”).
In many instances authorization is binary and tied directly to authentication – if a person is authenticated, then they get access to a resource. The authorization is tied only to the authenticated entity. Consider building security, for example – swipe your proximity card and you’re allowed in. Or, in rather more ancient practice, unlock the door with your key and get access. In the former case, the use of the proximity card (the “token”) is probably recorded someplace, so there is at least a rudimentary audit trail. When the key is the “token”, then there is no trail.
Until recently, the same was true concerning access to digital resources – if you authenticated to the system (network, server, application, etc.) then you got access as defined for the username you are using – most typically to a group of resources.
Note that there’s no actual proof that the person being authenticated is the same person for whom that particular account was created. The standard username/password combination that comprises the vast majority of authentication transactions today gives absolutely no assurance that the “proper” user (whatever that means) is the one being granted access. For example, I do password-protect my computer (it’s a laptop that travels with me). But my wife knows the password, and has had to use it on rare occasions when I’m not available but information is needed. The computer has no idea that it’s her and not me accessing those resources. Tokens do not improve this situation, and biometrics provide only slightly more proof since, in practice, it isn’t the biometric itself (a picture of your fingerprint, for example) that is checked but a key or token created from the parameters of the biometric.
I could, of course, set up a separate account for her so that she could authenticate as herself. But for the purposes she might need to access the PC, she would need at least the exact same authorizations that I have. Creating that second account, though, reduces the security of the system. With two accounts, the risk that a breach could occur is actually doubled – the risk of my account being compromised PLUS the risk that my wife’s account could be.
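The back-of-envelope arithmetic for that doubling is simple. The 1% yearly compromise probability below is an invented figure, purely for illustration:

```python
# Why a second account roughly doubles the breach risk.
# p is a hypothetical per-account compromise probability.

p = 0.01  # assumed chance that any one account is compromised

one_account  = p
two_accounts = 1 - (1 - p) ** 2   # chance that at least one of two falls

print(round(two_accounts, 4))                # 0.0199 - almost exactly 2x
print(round(two_accounts / one_account, 2))  # 1.99
```

For small per-account probabilities the combined risk is effectively 2p; only the tiny overlap term (p squared) keeps it from being exactly double.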
The usual method of controlling authorizations for a single user is to have multiple authentications for that user, multiple identities if you will. On my Windows system, I need to sometimes authenticate as the Administrative user when I need to access system resources, install/remove software, etc. Most of the time, I authenticate as a User with a more limited set of authorizations. The same is true of ‘nix systems, where the root account is used sparingly, and only when needed. Even within applications, a similar system is observed – most of the time, I would authenticate to a database as a user, but occasionally I need to be the database administrator (DBA) in order to, well, do administrative stuff. Again, in reality, most people don’t do this – although they should – choosing the “ease of use” that authenticating as the more powerful user brings.
The bottom line is that the important thing is the authentication. Get that right (which usually means enter the correct password) and the authorizations flow: it’s all or nothing, black or white, good or bad. But with data breaches, especially the theft of usernames and passwords, seemingly coming more frequently as each day goes by (and you’d think organizations would have learned by now, wouldn’t you?) we need to do something different.
For a dozen years or so, the “thing we need to do” has been identified as replacing the username/password combination with something “stronger.” But we’ve learned from study after study that there really isn’t anything strong enough – tokens, biometrics and “hardened” passwords are all flawed. And while stealing a biometric is tougher than guessing a password, it’s a whole lot more difficult to replace a fingerprint than it is to change a password.
As I’ve said for many years, and as I hope to re-iterate strongly at the upcoming European Identity & Cloud Conference (EIC), context, as part of a well thought out risk-based access management system, is what we need. Some use the phrase “adaptive authentication” to mean, in essence, a dynamic authentication which may require one, two or more factors depending on the circumstances. Still, this is really just one part of risk-based access control. It’s unfortunate that RBAC has come to mean Role-based AC, so we’ll need to come up with a different term – perhaps Risk Managed Access Control (RMAC).
The authentication continues as we’ve always done it – username/password, token, biometric, what-have-you, singly or in combination – but we collect context data (location, platform, date and time, and so on) and evaluate it giving it a risk metric. Alternatively we could use the inverse and call this a “trust metric” – the amount of trust we have in the validity of the identity of the person attempting the authentication. Based on that metric, we grant authorization on a sliding scale, which can be as fine-grained as your rules engine will allow.
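As a sketch, that sliding scale might look like the following. The thresholds and access tiers are assumptions for illustration only:

```python
# Sliding-scale authorization driven by a trust metric.
# Thresholds and tiers are hypothetical, for illustration only.

def grant(trust: float) -> str:
    """Map a trust metric (0.0 = no confidence in the claimed identity,
    1.0 = full confidence) to a level of access, rather than making an
    all-or-nothing decision at the authentication step."""
    if trust >= 0.9:
        return "full access"
    if trust >= 0.6:
        return "read/write, no admin functions"
    if trust >= 0.3:
        return "read-only, non-sensitive resources"
    return "denied - step-up authentication required"

# Same credentials, different context, different outcome:
print(grant(0.95))  # full access
print(grant(0.45))  # read-only, non-sensitive resources
```

A rules engine could make the tiers far finer-grained than this four-step ladder; the point is that authorization becomes a function of trust, not a binary consequence of a password match.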
We aren’t there yet, but we need to be. The presentations at this month’s EIC can bring us closer. You really should be there.
23.04.2013 by Dave Kearns
Larry Ponemon, of the Ponemon Institute, is well known for excellent surveys about technology issues. And Larry didn’t disappoint when he recently released “Moving Beyond Passwords: Consumer Attitudes on Online Authentication, A Study of US, UK and German Consumers” (warning: pdf file).
In summary, the report of the survey concludes:
- “The majority of consumers would use a multi-purpose identity credential to verify who they are before providing secure access to data, systems and physical locations.
- Banking institutions are considered the best for online validation and strong authentication and identity verification. Consumers in all countries believe banks would be the best to issue and manage a multi-purpose identity credential.
- The benefits of a multi-purpose identity credential are convenience (US & UK consumers) and security (German consumers). Identification and authentication when traveling, accessing the Internet and using social networks are the most popular reasons to have single ID.
- There is no clear consensus on what devices would be preferred to manage their multipurpose identity credential. However, in the US more consumers would prefer their mobile devices for identification purposes. In the UK, it is RFID chips. German consumers seem to favor biometrics.
- If consumers trust the organization, biometrics is acceptable to use for authentication.
- Voice recognition and facial scan are the most acceptable types of biometric authentication. Least acceptable in the US and UK is an iris scan. In Germany, least favored are fingerprints.
- Authentication is important when sharing devices with other users. The majority of consumers believe it is important to have authentication that securely verifies their identity on devices that are shared with other (multiple) users.”
So what we’re seeing here is that users favor stronger authentication, but also easier-to-use authentication (thus the preferences for mobile devices, RFID and biometrics as opposed to passwords). There’s also a strong feeling that the identity provider be trustworthy, or be seen as trustworthy: “Industries and organizations considered by consumers in all three countries as most trustworthy to safely issue and manage a multi-purpose identity credential are: banking institutions, credit card and Internet payment providers, telephone, wireless or cable services companies, healthcare providers and postal and delivery services. Least trusted are educational institutions, Internet service providers and retailers.”
The bottom line appears to be that users are looking for ease-of-use coupled with security and trust and these are exactly the issues we will be exploring next month at the European Identity & Cloud Conference (EIC). In particular, I’ll be moderating a track on Authentication & Authorization featuring a detailed look at “Versatile Authentication, Risk- and Context-Based Authentication: Why you need these Concepts”. Risk-based Access Control using context is a subject near and dear to my heart. It appears to be what the consumers in Ponemon’s survey are groping towards, without being able to articulate exactly what they want. It’s also something that seems to be gaining more traction in the marketplace, at least if I can judge by what I’m reading lately.
Chris Zannetos, CEO of Courion, recently wrote a blog post called “Context is everything”. In this look at what he calls “security intelligence,” Zannetos says:
“The activity and traffic monitors such as SIEM and deep packet inspection products have been looking at streams of information flows without the context to make sense of them. This is a bit like analyzing a baseball game by looking only at the types of pitches and result (hit, walk, out) — without understanding who is pitching, who is up to bat, what their past patterns have been, the ballpark, or the weather. In other words, the ‘Moneyball’ factor has been missing.”
<for my non-North American readers, substitute “football” (or “futbol”) for “baseball”>
And, of course, context is about more than a single packet – it’s the Who, What, When, Where, Why, and How of a transaction. Chris even alludes to a deeper context – the history of the context of similar transactions, which should be included in the analysis much like a Bayesian spam filter is used with email.
The second piece I read about context was from Jeff Rosenberg, a technical instructor in the Client Services group at Ping Identity. He didn’t use the word “context” in his blog entry called “Identity as a Rental (IDaaR),” but he did describe context-based authentication when he wrote:
“Did the user authenticate via password, certificate or one-time code? Is this user within the corporate network or coming in externally? Which training level or security clearance is required? Perhaps attribute-level permission is involved, such as LDAP group membership. When these questions are satisfied, the user checks out and the service is provided.”
Rosenberg then goes on to talk about the short-term use of particular attributes which are appropriate for the context of a given transaction, but that’s more appropriate for KuppingerCole’s discussions of Life Management Platforms, another subject that will be well covered at EIC next month.
Context, as a contributor to Risk-based Access Control, as collected for SIEM and for packaging identity attributes for short-term use is definitely a winner. And it is readily – and easily – available to most of you who use some form of SAML-based authentication/authorization system. You might wish to read (if you’ve nothing else to do right now) “Authentication Context for the OASIS Security Assertion Markup Language (SAML) V2.0” (another PDF file), all 70 pages of it.
But for today, the introduction should be sufficient: “If a relying party is to rely on the authentication of a principal by an authentication authority, the relying party may require information additional to the assertion itself in order to assess the level of confidence they can place in that assertion. This specification defines an XML Schema for the creation of Authentication Context declarations – XML documents that allow the authentication authority to provide to the relying party this additional information. Additionally, this specification defines a number of Authentication Context classes; categories into which many Authentication Context declarations will fall, thereby simplifying their interpretation.” In other words, this is a way to provide context to the transaction. Once you take context into account, then allowing a simple, easy-to-use factor (password, fingerprint, hardware token, etc.) is no longer a problem. Guessing someone’s password doesn’t get you the context in which it’s used and thus raises the risk factor for that transaction.
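For example, a relying party might weigh the reported Authentication Context class when computing its risk (or trust) metric. The class URIs below are real ones defined in the OASIS specification; the trust weights attached to them are invented for illustration:

```python
# How a relying party might weigh the SAML Authentication Context class
# reported in an assertion. Class URIs come from the OASIS SAML V2.0
# Authentication Context specification; the weights are hypothetical.

PREFIX = "urn:oasis:names:tc:SAML:2.0:ac:classes:"

TRUST_BY_CLASS = {
    PREFIX + "Password": 0.3,                    # bare password
    PREFIX + "PasswordProtectedTransport": 0.5,  # password over TLS
    PREFIX + "Smartcard": 0.8,
    PREFIX + "SmartcardPKI": 0.9,
}

def trust_for(authn_context_class: str) -> float:
    """Unknown or unlisted classes get no benefit of the doubt."""
    return TRUST_BY_CLASS.get(authn_context_class, 0.0)

print(trust_for(PREFIX + "PasswordProtectedTransport"))  # 0.5
```

This is the declaration doing exactly what the spec promises: giving the relying party the additional information it needs to decide how much confidence to place in the assertion.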
We have the tools, all we need is the effort to provide more secure, yet easy-to-use authentication ceremonies. What’s stopping us? Let’s talk about that at EIC next month in Munich.