03.12.2013 by Dave Kearns
In my last post (“Dogged Determination”) I briefly mentioned the FIDO (Fast Identity Online) Alliance, with the promise to take a closer look this time at the emerging password-replacing authentication system for the internet. So I will.
But first, an aside. It’s quite possible that the alliance chose the acronym “FIDO” first, then found words to fit the letters. Fido, at least in the US, is a generic name for a dog which came into general use in the mid 19th century when President Abraham Lincoln named his favorite dog Fido. Choosing a word associated with dogs harkens back to the internet meme “On the internet nobody knows you’re a dog”. With the FIDO system, no one except those you intended would know who you are. That’s my theory and I’m sticking to it.
FIDO was in the news last week when Fingerprint Cards (FPC) and Nok Nok Labs announced an infrastructure solution for strong, simple online authentication using fingerprint sensors on smartphones and tablets. The two companies initially implemented the joint solution using Nok Nok Labs’ client and server technology and commercially available Android smartphones with the FPC1080 fingerprint sensor, in order to demonstrate readiness to support the emerging FIDO-based ecosystem.
That should give you an idea of the thrust of the Alliance.
The FIDO system doesn’t require a biometric component, but it appears to be highly recommended. From the Alliance’s literature:
“The FIDO protocols use standard public key cryptography techniques to provide stronger authentication. During registration with an online service, the user’s client device creates a new key pair. It retains the private key and registers the public key with the online service. Authentication is done by the client device proving possession of the private key to the service by signing a challenge. The client’s private keys can be used only after they are unlocked locally on the device by the user. The local unlock is accomplished by a user-friendly and secure action such as swiping a finger, entering a PIN, speaking into a microphone, inserting a second-factor device or pressing a button.
The FIDO protocols are designed from the ground up to protect user privacy. The protocols do not provide information that can be used by different online services to collaborate and track a user across the services. Biometric information, if used, never leaves the user’s device.”
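The registration and challenge–response flow described in the quote can be sketched in a few lines of Python. This is strictly a toy illustration of the idea, not FIDO itself: textbook RSA with small fixed primes stands in for the real cryptography (never use numbers this small in practice), and all of the names are my own.

```python
# Toy sketch of the FIDO-style registration / challenge-response flow.
# Textbook RSA with small fixed primes stands in for real cryptography.
import hashlib
import secrets

# Key pair "generated on the client device" at registration.
p, q = 10007, 10009                  # toy primes -- far too small for real use
n = p * q                            # public modulus
e = 65537                            # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private key: stays on the device

def device_sign(challenge: bytes) -> int:
    """Device proves possession of the private key (after local unlock)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def service_verify(challenge: bytes, signature: int) -> bool:
    """Service checks the signature using only the registered public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# Registration: the service stores only the public key (n, e).
# Authentication: the service sends a fresh random challenge.
challenge = secrets.token_bytes(16)
assert service_verify(challenge, device_sign(challenge))
```

Note that the service never holds a shared secret: a breach of its database exposes only public keys, which is one of the protocol’s main attractions over passwords.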
FIDO is, first and foremost, about strong authentication. Two-factor authentication is a requirement. A biometric component (fingerprint, voiceprint, etc.) is highly recommended.
President of the Alliance is Michael Barrett, formerly CISO for PayPal, formerly president of the Liberty Alliance and before that VP, Security & Privacy Strategy for American Express. Interestingly, the VP of FIDO is Brett McDowell, currently Head of Ecosystem Security at PayPal, who was previously Executive Director of the Liberty Alliance and its successor, the Kantara Initiative. He also served as Management Council chair of the USA’s NSTIC (National Strategy for Trusted Identities in Cyberspace) Identity Ecosystem Steering Group. These are two guys who know identity systems inside out.
PayPal (which is always looking for stronger authentication methods) and Nok Nok Labs (which is always looking for better ways to use biometrics as well as strong authentication) were two of the founders of the alliance which has now grown to over 50 members including such big names as Google, Blackberry, Lenovo, MasterCard and Yubico as well as just about everyone in the biometric device space.
It’s a good cast of characters, but is that enough?
The impact of so many biometric-friendly members means that the Alliance first has to answer (again) all the questions about the “problems” with biometric authentication. Now, if you know me at all you know that “I ♥ Biometrics”, but getting others to like them is an uphill battle. In fact, the continuing argument about the security of passwords (I’ve been involved in it for 15 years!) is really a side issue for the FIDO Alliance. More important, I think, is its reliance on the Online Secure Transaction Protocol (OSTP).
OSTP is a protocol designed and issued by FIDO (they say they will turn it over to a public standards body once it is fully “baked”). It’s explained in a white paper (“The Evolution of Authentication”, a PDF file), where it’s generally referred to as the “FIDO protocol”. The heart of the system is the FIDO Authenticator, which the white paper explains:
“The FIDO Authenticator is a concept. It might be implemented as a software component running on the FIDO User Device, it might be implemented as a dedicated hardware token (e.g. smart card or USB crypto device), it might be implemented as software leveraging cryptographic capabilities of TPMs or Secure Elements or it might even be implemented as software running inside a Trusted Execution Environment.
The User Authentication method could leverage any hardware support available on the FIDO User Device, e.g. Microphones (Speaker Recognition), Cameras (Face Recognition), Fingerprint Sensors, or behavioral biometrics, see (M. S. Obaidat) (BehavioSec, 2009).”
As I said, biometrics strongly recommended.
Read the paper for more details of how it works.
Can the FIDO proposal succeed? Yes, it’s a well thought-out system that does provide strong authentication with a high degree of confidence that the user is who they claim to be.
Will the FIDO proposal succeed? That’s much more problematic. It requires that relying parties and Identity Providers (which can be the same entity) install specific server software and that users install specific client software. The client part could be an easier “sell” if it comes along with the biometric devices and services that FIDO members provide. Easier, certainly, in a smartphone environment, less so in a desktop/browser environment. History says that anything requiring users to voluntarily install something, or requiring relying parties to buy, install and maintain single-purpose services, is a long shot. And the FIDO solution requires both. Still, if the members of the FIDO alliance provide the software and compel their clients to install it, a tipping point could be reached. If so, I’d applaud it.
I will note that a number of my colleagues believe I’m reading too much into the so-called “biometric requirements” of FIDO, noting that hardware tokens (represented by Yubico and other members) are an even easier implementation since most modern smartphones can handle a microSD card, which could act as a hardware token – or, at least, turn the phone into a hardware token. It would be protected by a PIN, which users are familiar with entering for all sorts of services.
While I do agree with all that, the typical PIN is 4 digits, so there are only 10,000 possible combinations (0000 to 9999). That’s not strong enough for my taste. An automated brute force attack could try all the possibilities within minutes, and – since some combinations (1234, 1111, 0000, etc.) are far more popular than others – it could be only a few seconds before the code is broken. Nevertheless, if this would increase the uptake of the FIDO system, I’d be behind it – at least as a good beginning.
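To put that 10,000-combination space in perspective, here’s a quick back-of-the-envelope sketch. The guessing rate and the list of popular PINs are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope look at the 4-digit PIN search space.
pin_space = 10 ** 4            # 0000..9999 -> 10,000 combinations
attempts_per_second = 10       # hypothetical automated guessing rate
worst_case_minutes = pin_space / attempts_per_second / 60
print(pin_space, round(worst_case_minutes, 1))   # 10000 16.7

# Guessing popular PINs first shortens the expected time dramatically:
popular_first = ["1234", "1111", "0000", "1212", "7777"]   # illustrative
# If the PIN is one of these, it falls within the first second or two.
```

Of course, real devices impose lockouts and rate limits, which is precisely why the PIN only has to be strong enough to protect a locally held key, not strong enough to survive an offline attack.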
19.11.2013 by Dave Kearns
Some colleagues and I got into a short discussion about the FIDO alliance last week. That’s the Fast Identity Online Alliance, which was formed in July 2012 with the aim of addressing the lack of interoperability among strong authentication devices. They also wish to do something about the problems users face with creating and remembering multiple usernames and passwords. According to their web site, “the FIDO Alliance plans to change the nature of authentication by developing specifications that define an open, scalable, interoperable set of mechanisms that supplant reliance on passwords to securely authenticate users of online services. This new standard for security devices and browser plugins will allow any website or cloud application to interface with a broad variety of existing and future FIDO-enabled devices that the user has for online security.”
There’ll be more from us in the coming weeks and months about FIDO and other emerging protocols, but our discussion got me thinking about something else.
There are now more than 50 members of the alliance, and someone mentioned that Google was an early joiner – not a founding member, but getting on board soon after. The question was why Google would do this and what that portended for FIDO’s future.
Obviously it’s good to have a big name like Google on board. Along with other well-known names (Lenovo, Blackberry, MasterCard, Yubico and more) the Google brand makes for a bigger “buzz” and makes getting press (and, let’s face it, analyst) coverage easier. There’s no question that having Google on board is a major plus for FIDO. But is FIDO a plus for Google?
Google is known for supporting many authentication and identity protocols and services. It was on board early with OpenID, OpenID Connect, SCIM (System for Cross-domain Identity Management), OAuth and OAuth 2.0, SAML and more. In fact, there are few internet-friendly identity-related mechanisms that Google doesn’t support. But why?
One clue is the person responsible for all this activity. Eric Sachs is Google’s Group Product Manager for Identity. On his Google+ site he identifies his goal rather succinctly: Eliminate Internet passwords. He’s very serious about that.
A first step was the implementation of two-factor authentication (2FA) by the search giant last year. Rumor has it that in the not too distant future two-factor AuthN will be required by the company. FIDO, of course, has two-factor authentication as one of the ways it wishes to strengthen online account access.
Why is Google doing this? 2FA is not as easy to use as a simple username/password. It is, though, more secure. And Google wants you to think that its services are more secure than its competitors’. If you feel your account (and your content) is more secure with Google Docs, Google Drive, YouTube, Google+, etc., then you are more likely to use those services rather than another. Since Google’s business is based on advertising, the more “eyes” it gets on its services, the more ads it sells – and for more dollars per ad.
The second part of the equation is that by supporting strong authentication Google is more assured that you are who you say you are. You may remember the brouhaha that ensued when Google required so called “real names” for Google+ back in 2011 (the aptly named “Nymwars”). Google’s advertising revenue is enhanced when it can deliver ads to you that resonate with your tastes, your wishes and your lifestyle – unlike Walmart, whose marketing gaffe (selling smoked hams to Jews) went viral. By making sure it’s really you, and by tying together everything you do on Google properties, the company ensures that it has the best targeted marketing in cyberspace.
Targeted advertising is another can of worms, which I won’t get into here. But, if that’s what keeps Google working for stronger authentication and password elimination, then I’m OK with that.
Martin Kuppinger suggested another possible reason: Google not only wants to eliminate username/password (which makes sense), it wants to end up with strong authentication that does not require massive cost and logistics (deploying hard tokens, etc.) on its part. That is just not feasible at Google’s scale. It is a problem all larger organizations face, even while smaller organizations can handle it. Thus Google is looking for standards that allow it to reuse existing strong authentication in a flexible way. Makes sense to me.
To get back to the beginning of this piece, it appears that Google supports any protocol or system that shows any promise of bringing stronger authentication. So the question remains, is FIDO the right vehicle to do that? I’ll look at that question next time. For now, take a look at what Martin had to say last spring in “The FIDO Alliance – game changer for Internet Security?”
05.11.2013 by Dave Kearns
Some time ago, in the wake of Wired journalist Mat Honan’s story of his account compromise (“How Apple and Amazon Security Flaws Led to My Epic Hacking”), I wrote about BYOI – Bring Your Own Identity – and how “In the enterprise, there’s even less reason to support today’s BYOI.” Some time before that, my colleague Martin Kuppinger had also addressed this issue (“Bring Your Own Identity? Yes. And No”), dismissing the BYOI idea as simply a small piece of a much larger system.
But I think we need to re-address this issue.
First, the term “BYOI” as it’s commonly used is misleading. It’s not your “identity” you bring with you (everyone brings their own identity wherever they go), but a third-party authentication that you bring to the table, such as Facebook, Apple, Google, Amazon, etc. (generally referred to as “social” logins), as well as other third-party systems (government eID, healthcare identities, etc.). So let’s be sure to keep that context in mind.
Likewise, from the Enterprise’s point of view, there’s the question of who is bringing this third-party AuthN to the table: an employee, contractor, vendor, client, partner, customer – or a visitor who might potentially fill one of these roles. Each of these roles has different requirements for authentication, and each will look for different authorization, so each will be scrutinized differently by whatever passes as our Risk-Based Access Control (Risk-BAC) system. And make no mistake about it, we all (that is, our organizations) have a Risk-BAC system. It might be highly sophisticated, automated and dynamic, or it might be a simple, static, implemented-by-hand system based on little more than a username/password combination for access (being high risk doesn’t remove the risk-based facet of the system; it just means a high level of risk is being accepted).
For visitors who may or may not be potential vendors, clients or employees, the use of a social login is probably sufficient. We want these people to be able to access the resources they need with a minimum of fuss, but with a certain amount of information collected (name, email, physical address, age, and possibly other details). Asking the person to fill out a long form just to be able to view job openings or download marketing materials is going to turn off some otherwise desirable potential employees or customers. Fortunately, we can use the API made available by the social login provider (Facebook’s Graph API, for example) to gather this information by simply asking the person for their approval. So, yes, BYOI works in this case, and works better than creating our own authentication system for this class of users. However, a difficulty could arise when this initial contact is then extended to a full-scale client (or vendor or employee) account – how do we tie together the initial information collected with this new higher value account?
For existing vendors, partners, suppliers and others, whose organization has a current relationship with our organization, the best result would come through federated login. That is, the person would login to their own organization’s system and we would accept that the person is an authorized representative of that organization. We really don’t need any other information about them. We’ve previously negotiated the authorizations that the user would have, which could be adjusted based on the information sent along with the federation credentials. For example, a large supplier might have multiple people needing access to our inventory of various items and would send along qualifying information so that our system would give the correct authorization for that inventory. All the user account maintenance is done by our federation partner, so there’s a sense that the data quality is better than if we tried to maintain it in our system. Using a social BYOI login could be disastrous as we’d have no way of knowing if the person was still employed by the partner.
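A minimal sketch of how qualifying information sent along with a federated login might map to local authorizations. The attribute and permission names here are invented for illustration; a real deployment would carry them in SAML assertions or similar federation tokens:

```python
# Hypothetical mapping of federation-assertion roles to local permissions.
# All attribute names and permission strings are illustrative only.

ROLE_PERMISSIONS = {
    "inventory-viewer":  {"inventory:read"},
    "inventory-manager": {"inventory:read", "inventory:update"},
}

def authorize(assertion: dict) -> set:
    """Derive local permissions from a partner's federation assertion."""
    perms = set()
    for role in assertion.get("roles", []):
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

# A supplier's assertion tells us which inventory this user may touch;
# account maintenance (including deprovisioning) stays with the partner.
assertion = {"subject": "jdoe@supplier.example", "roles": ["inventory-viewer"]}
print(authorize(assertion))   # {'inventory:read'}
```

The key design point is the last comment: when the partner disables the account on their side, the federated login simply stops working, with no cleanup needed on ours.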
Then there are the cases of employees and contractors. First we’ll divide contractors into two groups – independent, individual contractors and those working for (or with) a contracted agency.
Those people under contract to a third-party agency, who do work for us under the control of the agency, are probably best handled as with other partners – by federation. The situation is a bit different, as we’d probably need to adjust authorizations for each individual depending on the work they were doing, but it’s also probably best that we let the contracting agency handle the initial authorization, especially as it’s that agency’s HR department that holds all of the individual’s relevant identity data. Of course, should that contractor become a regular employee, we would also be faced with converting any data collected about that person from the federated system into identity data within our enterprise system.
For those contractors who are directly contracted by our organization – and not by way of a third party agency – we should use the same controls we would for employees. Generally, the only difference is in the tax status of the employee or the legal status (e.g., “employment at will” statutes), which from our perspective (Identity and Security) are irrelevant.
Employees and direct contractors are the ones that need to have individual accounts within our enterprise. These are the people who need to log in directly using the credentials we provide. These are the ones we need to scrutinize most, using multi-factor authentication when our risk-based systems suggest that we do. These are also the individuals who should be subjected to the most rigorous identity validation when they are first enrolled in our system, something the HR department should handle. By no stretch of the imagination should we ever consider using a social “BYOI” login for these people.
BYOI – especially the case for social logins – by its very nature has a low level of assurance for identity when compared to enterprise controlled systems (all else being equal). It’s useful for low value transactions but – at least as it’s constituted today – should give security personnel nightmares if ever used for access to the organization’s valuable resources.
So to the question “does BYOI have a place in the enterprise?” we can answer with a qualified yes, but also a qualified no. The Information Risk & Security Summit 2013, coming up in Frankfurt Nov 27-28, will go into BYOI in much more detail. You should register now to attend.
22.10.2013 by Dave Kearns
Lately I’ve been writing a lot about how you should be improving your authentication and authorization. I’ve been haranguing you to get Risk-Based Access Control (RiskBAC) sooner rather than later. There has been some push back.
It’s a truism in IT and security that “technology is easy, it’s the people that are hard.” Well, when moving to RiskBAC, the technology – or its implementation – isn’t easy. So if the technology isn’t easy, what does that say about the people? Darn near impossible, that’s what.
There are, also, two different people problems. The first is users. While we’d like users to understand and embrace any new technology we implement, the truth is that they hate change no matter how much better off they’ll be, no matter how hard we try to sell it. In the end, they’ll go along because they have to, not because they want to. Education is key, but bribery can also be used, as this tale about a self-service password reset implementation shows.
Beyond the users, though, comes the real people problem – getting signoff on the budget for the project from the key players in the organization. Everyone needs someone else’s signoff, everyone. Even the CEO needs the board to sign off. Over the 20 years I’ve been doing this I’ve tried to present strategies to get that buy-in. One of my favorites revolved around technologies for regulatory compliance. Because some regulations included criminal penalties for non-compliance, the argument I suggested was to present the technology you wanted (or needed) as the best way for the powers-that-be to stay out of jail.
While effective, for the most part, that argument didn’t win many friends in the boardroom.
More recently, when talking about Information Stewardship (see our Advisory Note: From Data Leakage Prevention (DLP) to Information Stewardship), I urged that you compile a clippings folder of the consequences of information leaks in terms of publicity and monetary losses (or even entire corporate losses as in the case of DigiNotar). Again, not something that gets you invited to the weekend outing at the board chairman’s hideaway.
So what can you do?
If you’d thought ahead you would have taken some business courses in college, maybe held out for an MBA or at least taken a course in negotiation strategy. Failing that, let me recommend a couple of books.
Getting to Yes (1991 edition, by Roger Fisher, William L. Ury and Bruce Patton) is subtitled “Negotiating Agreement Without Giving In”. A companion volume of sorts, Getting Past NO (by Ury alone), subtitled “Negotiating in Difficult Situations”, was released in 1991 along with the second edition of Getting to Yes. Both have been perennial entries on the Business Week best-sellers list.
Both involve a negotiating strategy originally developed at Harvard University, where Fisher and Ury were members of the Harvard Negotiation Project. Their work on the psychology of negotiation led them to postulate a method called “principled negotiation,” which they expounded in the 1981 first edition of Getting To Yes.
Principled negotiation involves five propositions:
- Separate the people from the problem.
- Focus on interests, not positions.
- Invent options for mutual gain.
- Insist on using objective criteria.
- Know your BATNA (Best Alternative To Negotiated Agreement)
In other words, make friends, not enemies.
Ten years later, when the second edition was issued, Ury realized that no matter how well you followed their advice, there would always be those who simply didn’t listen and kept saying “no”. That’s where the second book comes in.
Getting Past NO looks at what comes next in the negotiation process. The book explains in detail how to:
- Adopt a joint problem-solving mentality
- Break through the five barriers to cooperation: your reaction, their emotion, their position, their dissatisfaction, their power.
- Prepare, prepare, prepare yourself by identifying/developing:
  - The interests of each side
  - Your BATNA – Best Alternative To a Negotiated Agreement
  - What do you aspire to? What would you be content with? What could you live with?
It is Ury’s underlying thesis that a good negotiation is achieved by two negotiators meeting the needs of both – never by one more-skilled negotiator overpowering the deal. If one negotiator overwhelms the other, the deal itself is weakened: the loser might not recognize his involvement and his interests in the deal, and therefore has no stake in seeing it succeed.
What’s really needed, of course, are great marketing skills. You need to convince the others in the negotiation that what you’re offering is what they want (not necessarily what they need). Of course, they may not know that what they want is Risk-Based Access Control, but by identifying what it is that they want in Information Security – even if that is only the wish to keep the company’s name out of the newspaper – then you can better form your “pitch” for the technology you know that they need. For a closer look at our thoughts on this see Martin Kuppinger’s keynote from last spring’s European Identity Conference.
Get the books, learn the methods, Get To YES.
08.10.2013 by Dave Kearns
Apple’s new iPhone (the 5S model) is equipped with the Touch ID fingerprint reader. Its release just a couple of weeks ago has generated more discussion (and bloviating) about biometrics, fingerprints in particular, than all other fingerprint systems together. Not only that, but it’s forcing me to do something I’ve rarely – if ever – done before: say something nice about Apple.
In the twenty years I’ve been writing and opining about technology I’ve occasionally ranted about Apple, its products, its management and its fans, but for the first time, today, I can say bravo Apple.
Bravo Apple for introducing millions of users to the benefits of biometrics.
There’s been a lot of urban mythology about Touch ID being spread already, so here’s a look at some of the more outrageous claims, and why they are outrageous.
- The NSA will now have your fingerprints (variations include the CIA, the local police, GCHQ, the DGSE, and the Chinese secret police)
- Criminals can get your fingerprints and plant them at crime scenes (variation: so can the police)
The iPhone doesn’t send your fingerprint data to Apple, to the cloud, or anywhere else (it’s stored only locally, on the phone), nor is a picture of your fingerprint stored anywhere. Like all portable devices (smartphones, “dumb” phones, tablets, etc.), yours probably has latent fingerprints all over the screen and the cover. Anyone in possession of your phone could “lift” those prints for whatever purpose, criminal or benign. But that has nothing to do with the Touch ID function.
- The iPhone reader is easily hacked; it even works with your cat’s paw!
Germany’s Chaos Computer Club claims to have “…successfully bypassed the biometric security of Apple’s Touch ID using easy everyday means.” Everyday means? Here’s how they describe the process:
“First, the residual fingerprint from the phone is either photographed or scanned with a flatbed scanner at 2400 dpi. Then the image is converted to black & white, inverted and mirrored. This image is then printed onto transparent sheet at 1200 dpi. To create the mold, the mask is then used to expose the fingerprint structure on photo-sensitive PCB material. The PCB material is then developed, etched and cleaned. After this process, the mold is ready. A thin coat of graphite spray is applied to ensure an improved capacitive response. This also makes it easier to remove the fake fingerprint. Finally a thin film of white wood glue is smeared into the mold. After the glue cures the new fake fingerprint is ready for use.”
What? You don’t keep “photo-sensitive PCB material” around the house?
As to the “cat’s paw” statement, one user found that he could enroll his cat by swiping its paw on the Touch ID button. You could also enroll just about any ridged surface. And that same ridged surface would be validated, and would unlock the device, when applied subsequently. It’s not that your cat’s paw could be mistaken for your finger!
- Fingerprint matching is notoriously unreliable
Fingerprint matching is not 100% reliable. The rates of false positives and false negatives, though, generally grow with the number of fingerprints being matched against. Take one fingerprint and try to get a match against the millions in the FBI database and there’s a small, but significant, chance of a false match. But take one fingerprint and try to get a match against the one dataset stored on your iPhone and you’ll find it’s almost always accurate. An occasional mis-read because you swiped your finger poorly is hardly the fault of the hardware and software.
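A quick calculation makes that scaling concrete. The per-comparison false match rate below is an assumed, illustrative figure, not a measured one for any real sensor:

```python
# How the chance of at least one false match grows with the number of
# templates matched against. The per-comparison false match rate (fmr)
# is an assumed figure, purely for illustration.

def p_any_false_match(n_templates: int, fmr: float = 1e-5) -> float:
    """P(at least one false match) = 1 - (1 - fmr) ** n_templates."""
    return 1 - (1 - fmr) ** n_templates

print(f"{p_any_false_match(1):.6f}")           # one template on your phone
print(f"{p_any_false_match(50_000_000):.6f}")  # a millions-strong database
```

With one stored template, the odds of a stranger’s finger matching are vanishingly small; against tens of millions of templates, at least one false match becomes a near certainty. Same sensor, very different outcomes.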
- Fingerprints are a poor replacement for passwords. What happens if they’re compromised? You can’t change your fingerprint
This is a fallacious argument because, in addition to your having ten fingers (which can be used in multiple combinations – millions of them, in fact), the algorithm used to transform your fingerprint data into a private key could also be changed, yielding millions more possibilities. But I will partially agree with this: biometrics shouldn’t be used to replace passwords. Instead, they should be used to replace usernames.
Usernames need to be unique. While there’s no specific proof that fingerprints (or iris scans or heart waves) are unique, it’s easy enough in any particular namespace to ensure that the biometric is unique at the time of enrollment by comparison to those already enrolled (just as Gmail, for example, tells you that “dkearns is already in use. Would you like to be dkearns1043?”). Also, in the 150 years or so that we’ve been using fingerprints for identification there’s never been a proven instance of two sets being alike. I think we can safely say there’s a Theory of Unique Fingerprints, at least until someone can refute the theory.
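That enrollment-time uniqueness check could be sketched like this. Real biometric matching is fuzzy scoring rather than exact comparison, so the hash here is only a stand-in for a template match, purely for illustration:

```python
# Sketch of an enrollment-time uniqueness check within one namespace.
# A hash comparison stands in for real (fuzzy) biometric template
# matching, purely for illustration.
import hashlib

enrolled: set[str] = set()

def template_id(sample: bytes) -> str:
    """Reduce a captured sample to a comparable identifier."""
    return hashlib.sha256(sample).hexdigest()

def enroll(sample: bytes) -> bool:
    """Reject a sample already enrolled -- like a taken username --
    otherwise store it and accept."""
    tid = template_id(sample)
    if tid in enrolled:
        return False          # "dkearns is already in use"
    enrolled.add(tid)
    return True

assert enroll(b"alice-fingerprint") is True
assert enroll(b"alice-fingerprint") is False   # duplicate rejected
```

The point is that uniqueness doesn’t need to hold globally, any more than usernames do; it only needs to be enforced within the service’s own namespace at enrollment time.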
In the past I’ve suggested that an email address (which does have to be unique throughout the internet) makes an ideal username. But it’s ridiculously easy to know (or guess) someone’s email address. Standard usernames (first initial and seven letters of last name, for example) are frequently the part of the email address to the left of the “@,” so again, easily guessed. But you can’t guess a fingerprint. You can’t discover it through social engineering, either. It can be collected, but despite what the Chaos Computer Club says, that’s not particularly easy for the average person (e.g., one who could guess your username or email).
For unlocking your iPhone, a swipe of your finger is “secure enough” for most situations. But if Apple wants to also use the finger swipe to authorize access to apps (or “outside the phone” data), then coupling the swipe with a PIN, password or pass phrase makes a lot of sense. It’s not absolutely secure, but it is strong enough for the average user, their average device and the average data they need to access. Even if someone has stolen (or hacked) your password, they still need to re-create your fingerprint. Not at all easy, as we’ve seen. Fear not, swipe away!
24.09.2013 by Dave Kearns
PII, Personally Identifiable Information (also phrased as Personal Identity Information) is at the heart of identity security and privacy. Yet, like almost all terms in the Identity sphere, it suffers from multiple overlapping definitions leading to misunderstandings, heated discussions and a distinct lack of clarity.
Major sources of these problems are the codified definitions from national laws, standards bodies and government agencies (such as the US National Institute of Standards and Technology).
There’s not much I can do about government definitions, no matter how wrong they are. But I can explain my reasoning for the way in which I discuss PII and which I believe to be a good basis for sound decision making. Too much decision making, unfortunately, is done in the heat of the moment without proper forethought or review. This is especially true – and especially unfortunate – when government entities get involved.
So let’s break down PII into its component parts. First, there’s Information.
Information is data, but not all data is information. As Mike Small and I wrote in From Data Leakage Prevention (DLP) to Information Stewardship: “Data is nothing more than the symbols which are processed by the computer. Data, in itself, has no meaning and no value. Information is data with context or processing that makes it useful…” A Social Security number, a date of birth, a postal code, a given name – each of these is a bit of data but none – on their own – is information. None, on their own, have context.
Take date of birth. Month and day of birth can limit the possible identities to approximately 1/365th of the world’s population (given an even distribution) – a number approaching 20 million! You can limit this by adding the birth year, but that still would leave over 175,000 people in the “pool” – hardly a good identifier of an individual!
Given name, of course, is easy to see as a non-starter in the identity sweepstakes. How many “Dave”s do you know of? Even those celebrities known by a single name (Beyoncé, Christo, Oscar, Madonna, etc.) are far from unique. Google “Christo” and see how many show up. If I refer to “Beyoncé”, especially in an entertainment context, then most people will infer that it is Beyoncé Knowles, the singer, that I’m talking about. And, in fact, should I wish to talk about a different Beyoncé I would need to include additional data in order for the reference to be considered actual information, but this is the exception not the rule.
For some (and this is the source of much discussion in identity circles) that inference is enough to make that bit of data qualify as PII. For example, if I’m talking to someone at the European Identity Conference and a third person hears me refer to something that “Martin” said – with no further qualification of “Martin” – the inference will most likely be made that I’m speaking about Martin Kuppinger. But suppose I had been talking about my favorite movie lines and wanted to quote Martin Short’s character (Ned Nederlander) in “Three Amigos”. I might say “Then Martin said: ‘Wherever there is injustice, you will find us. Wherever there is suffering, we’ll be there!’“ A stretch, maybe, but possible.
So without context, data cannot be considered information. And if it’s not information then it certainly can’t be “identifiable” information. Can there be information (in the sense of identity information) that is not identifiable information? Yes, I believe there can be.
I was an early subscriber to Google’s Gmail service, and was able to grab the address dkearns AT gmail DOT com. There are now many people who have the address dkearnsX AT gmail DOT com where “X” is a one, two, or three digit number (which allows you to infer just how many “dkearns” there are in the world). I know this because not a week goes by that I don’t receive email intended for one of these other “dkearns” addresses but from which the trailing number has been stripped (whether by the user, the sending party, an email server or something else). I know it’s misdirected because the notes often begin with a salutation: “Dear Dennis” or “Dear Dierdre” or “Dear Danny”. Sometimes these notes come from friends or family of the intended recipient. Most often they’re from a store or service that the person has signed up for. In none of these cases, though, can I – as the owner of the in-box – be identified as the person the note was intended for. It’s information, but not identifiable information.
The danger there is that one of the other dkearns will say or do something to bring himself to the attention of the security services (maybe Danny Kearns is a well-known member of the IRA). The content of an email, coupled with the known non-American status of Danny Kearns, could place my inbox on the NSA’s watch list. In effect, we now have Personally mis-Identifiable Information!
Your identity is made up of a very large, almost uncountable, number of attribute-value pairs (e.g., GIVENNAME = Dave). Each value is another bit of data. Few, if any, on their own can identify you. If you can collect a number of these data points within a given context, then you might have information. If that information is unique within that context – or “namespace” as we call it – then you have identifiable information. If, then, you can tie that identifiable information to a particular person, you would have Personally Identifiable Information.
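That narrowing from raw data to identifiable information can be sketched with a toy filter (all records and attribute names below are invented for illustration):

```python
# Toy illustration: each attribute-value pair filters the candidate pool.
# All records here are fictional.
people = [
    {"given": "David", "surname": "Kearns", "zip": "20886", "age": 62},
    {"given": "David", "surname": "Kearns", "zip": "20886", "age": 30},
    {"given": "David", "surname": "Smith",  "zip": "20886", "age": 62},
]

def narrow(pool, **attrs):
    """Keep only records matching every supplied attribute-value pair."""
    return [p for p in pool if all(p.get(k) == v for k, v in attrs.items())]

pool = narrow(people, surname="Kearns", zip="20886")
print(len(pool))      # still ambiguous: 2 candidates

pool = narrow(pool, age=62)
print(len(pool))      # unique within this namespace: 1 candidate
```

Only once the combined attributes are unique within the namespace does the collection become identifiable information.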
PII is important. PII needs to be protected. That doesn’t mean every bit of data is equally important, nor that every bit of data needs a Herculean effort to secure it. It’s hard enough to keep the real PII private without stretching our resources so thin as to cover all possible attribute-value pairs. What I’m talking about is a sense of priorities, especially for those setting information governance regulations. Leaks, breaches and theft of Information, especially Identifiable Information, need to be addressed in as strong a manner as possible. Leaks, breaches and theft of data and non-identifiable information – not so much. Let’s keep things in perspective.
Coming up on Thursday (9/26/2013) is my webinar “Authorization as a Calculated Risk”. I’ll be joined by:
- Brian Spector, CEO, CertiVox;
- Jamie Cowper, Senior Director of Business Development and Marketing, Nok Nok Labs;
- Gerry Gebel, President, Axiomatics Americas.
I hope you’ll join us also.
10.09.2013 by Dave Kearns
If you’ve followed my writings for very long, you know I’m a strong believer in biometric authentication. (See, for example, this Network World column from over a dozen years ago.) Why? Well, for one reason, unlike a password or a hardware token, you can’t misplace or lose a biometric (fingerprint, facial scan, keyboard dynamics, etc.) – it’s always with you. Now there’s another biometric system for you to look at.
Bionym, Inc. has announced Nymi, a biometric reader that goes a step further than most others.
Nymi is a bracelet with two electrical contacts – one on the underside, which is in contact with your wrist, and one topside for you to press with a finger. Putting the bracelet on powers it up; touching the top contact completes a circuit, allowing Nymi to read your unique “cardiac rhythm”, similar to the reading an electrocardiogram takes. Bionym assures us that everyone’s cardiac rhythm is unique. That signal is then sent (via Bluetooth) to a nearby device (such as a smartphone), which is running the app registered to that biometric signature and which contains the data and/or rules necessary to authenticate to other devices and services.
As long as the bracelet stays on your wrist, there’s no need to re-read the cardiac signal. Should it be removed and then put back on, you would need to re-authenticate. That also means there’s no need to actively authenticate to any registered service – which could be a great timesaver.
Along with the ECG sensor and Bluetooth transmitter, the wristband contains a gyroscope and accelerometer so that it can also support gesture-unlocking scenarios, such as different gestures to unlock your car, your PC, a door, etc.
Bionym calls the Nymi a 3-factor security system. They say that to take control of your identity you must have your Nymi, your unique heartbeat and an Authorized Authentication Device (AAD), which would be a smartphone or device registered with their app.
They also claim the Nymi is built upon the principles of Privacy by Design, explaining that this means that only you control and access your identity and personal information. Since that information is not sent to a third party, the company believes that – in future – personalization information could also be stored. They theorize, for example, that the “smart house” of the future would react to the entry of someone wearing (and authenticated to) the device by adjusting temperatures, lighting, entertainment, etc. The device includes hardware-based encryption for all transactions, by the way, further strengthening the authentication services as well as further protecting personally identifiable information.
This is a very interesting concept – it’s easier to use than passwords, and safer. It is still susceptible to the problems of a token (it can be lost, stolen or strayed), but a lost bracelet won’t work with someone else’s heart wave, and remembering to wear it should be as automatic as remembering to put on socks. As yet there are very few applications, but Bionym says the app will be available on iOS, Android, Windows and Mac OS X initially. An open-source SDK will allow developers to port support to other platforms.
Other news of wearable technology came from Samsung last week with the release of the Galaxy Gear, the next generation of smart device, a “smartwatch”. The smartwatch will work in concert with users’ smartphones (it won’t replace the phone yet). Among the features shown at its debut:
- Users will be able to accept phone calls on the watch, answering calls by putting their wrists next to their heads. There are speakers and a microphone in the clasp area of the watch.
- Gear will also respond to voice commands, so users don’t always have to physically fiddle with the watch while placing calls.
- The watch also has a camera that captures short video clips — visual memos, the company is calling it — as well as photographs.
- The smartwatch will also be able to translate signs in foreign languages simply by analyzing a picture. The Gear will also support its own apps, which have been specifically designed for the watch.
The Gear would seem to be ideally matched to the Galaxy Note smartphone, one of the devices labeled as a “phablet” because it’s a phone with a larger screen and form factor. The Gear would allow conversations, for example, without the need to hold a large device to your head.
All we need now is for Samsung and Bionym to get together and develop an “all-in-one” wrist appliance that could handle authentication as well as control for your devices. To me that’s the logical extension of these technologies.
Speaking of authentication and access, remember that on the 26th of this month I’ll be hosting a webinar called “Authorization as a Calculated Risk”. Joining me will be Brian Spector from CertiVox, Jamie Cowper from Nok Nok Labs and Gerry Gebel from Axiomatics. Register now.
27.08.2013 by Dave Kearns
I subscribe to a clipping service which delivers pointers to web documents to my inbox for items I’m interested in (described by keywords). Just last week, it pulled in a document about “The Changing role of the Access Control credential.” Well, access control is one of my favorite topics, so I jumped right into it. Only later did I discover that the article in question had actually been published almost a year ago. (The clipping service is always adding new sources, and scans all of the available documents on a new source, so – from time to time – I do get somewhat outdated links.)
Still, it was interesting, especially when I read: “The next generation of access control credentials are expected to do more than provide door access.” I chuckled, because I’d written something very similar a decade ago when, after that year’s RSA Conference, I commented on the fact that all the security vendors wanted to talk about was converging physical and logical access (that is, building and computer access) via smartcards and proximity cards. The work did start back then, but it evidently hasn’t progressed very far.
Back then, RFID (Radio Frequency Identification) was seen as the technology that would make it all happen. Some were touting it as the final nail in the coffin of passwords. You may have noticed that that didn’t actually happen.
I closely followed one experiment with RFID-enabled proximity cards for both physical and logical access. The HMO (Health Maintenance Organization) I belonged to issued cards to its doctors and other practitioners. When I visited my doctor, he only had to sit down at the keyboard in the examining room and was automatically logged in to the system, able to pull up my records with no further authentication needed. The same card, when in the vicinity of a locked door that the doctor was authorized to use, would open the lock so he could go through. Initially, it seemed like a very good system.
That is, until the day I noticed that the computer in the examining room in which I was waiting kept jumping from the “please log in” screen to one which appeared to give me access to patient records! There was obviously no shielding in the walls, and a doctor accessing the computer in the next room was also activating the one in the room I was in. I pointed this out to my doctor when he arrived and, I was pleased to notice, the next time I visited the office that system was gone.
RFID tags can be read at ranges from under a meter to as much as 100 meters, depending on the frequency used. One meter is far too short for continuous logical access (one of the best uses of proximity cards for computers: authorization happens when the card is read, and the session ends when the card leaves proximity) – imagine if your screen went blank whenever you leaned back! Longer ranges, though, bring up the image of all the PCs in your office responding (and logging you in, then out) as you walk down the hallway.
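The proximity pattern – session opens when the card is read, ends when the card leaves range – can be sketched in a few lines (the card IDs and the one-meter threshold are illustrative assumptions; a real system reads signal strength, not distance):

```python
# Sketch of proximity-based session control: a session opens when an
# authorized card comes into range and closes when it leaves.
AUTHORIZED = {"card-1234"}
MAX_RANGE_M = 1.0   # short-range cards read reliably only within ~1 m

class Workstation:
    def __init__(self):
        self.session_for = None   # card ID of the active session, if any

    def card_seen(self, card_id, distance_m):
        in_range = distance_m <= MAX_RANGE_M
        if in_range and card_id in AUTHORIZED and self.session_for is None:
            self.session_for = card_id        # card arrives: log in
        elif not in_range and self.session_for == card_id:
            self.session_for = None           # card leaves: log out

ws = Workstation()
ws.card_seen("card-1234", 0.3)   # doctor sits down
print(ws.session_for)            # session open
ws.card_seen("card-1234", 5.0)   # doctor walks away
print(ws.session_for)            # session closed
```

The HMO's flaw maps directly onto this sketch: with no shielding, a longer-range reader treats a card in the next room as "in range" and opens a session on the wrong machine.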
Using such a card as a single-factor authentication device also presents the problem of lost or stolen cards – there’s no check on what someone else can do if they’re holding your credentials.
Over the years, Near Field Communication (NFC) technology has gained popularity, especially as compared to RFID. The major benefit is that NFC allows two-way communication (RFID is limited to one-way). The NFC device can also store data (such as authorizations). Given the two-way nature, whenever the card is used – either for physical or logical access – that data can be updated, modified or removed, or even have new authorizations added. Still, having possession of the device is all that’s necessary for the authorization to work.
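That two-way update capability is the key difference; a toy sketch of a credential whose stored authorizations are rewritten at each tap (the record layout and door names are invented):

```python
# Sketch: an NFC credential whose stored authorizations are updated by
# the reader on each use (two-way), unlike a read-only RFID tag.
credential = {"id": "badge-42", "authorizations": {"lobby-door"}}

def tap(cred, reader_grants=(), reader_revokes=()):
    """The reader checks the credential, then pushes authorization
    changes back onto the device before the session ends."""
    cred["authorizations"] |= set(reader_grants)
    cred["authorizations"] -= set(reader_revokes)
    return cred["authorizations"]

tap(credential, reader_grants={"lab-door"})      # new access added on use
tap(credential, reader_revokes={"lobby-door"})   # stale access removed
print(sorted(credential["authorizations"]))      # ['lab-door']
```

Note that nothing in this flow checks *who* is holding the badge – which is exactly the weakness the next paragraph's second factor addresses.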
The answer there, of course, is two-factor authentication (2FA) which is gaining ground through the NFC-enablement of smartphones. It’s estimated that 2.3 percent of cellular handsets, or 35.4 million, shipped in 2011 were NFC-enabled. In 2014, the NFC attach rate is projected to reach around 20 percent and in 2016, 44 percent. This equates to shipments of 377 million NFC-enabled handsets in 2014, and 918 million in 2016. And those are conservative estimates.
Having to authenticate to the device – through a PIN or, with the ever-increasing number of phones that read fingerprints, faceprints and even voiceprints, a biometric – before using it as a proximity device overcomes most of the problems of a misplaced device. A stolen one, of course, could be de-provisioned as soon as it was reported, and wiped the first time it was used after that.
It isn’t too farfetched to believe that the NFC-enabled smartphone will become the cornerstone of the Life Management Platform of the future.
Register now for my upcoming webinar “Authorization as a Calculated Risk” coming up on September 26 where we’ll explore the future world of access control. I’ll be joined by some surprise guests from the industry who can deliver unique insights on current developments in the field.
13.08.2013 by Dave Kearns
It was just a couple of months ago that trend-watcher The Next Web announced that Google Chrome had overtaken Microsoft’s Internet Explorer as the Web’s most used browser, a position that IE had held since, well, way back in the last century.
So it’s unfortunate that just last week it was revealed (yet again) that Chrome is not very protective of stored passwords.
According to a story in the UK’s Telegraph newspaper, a security flaw in Google’s Chrome browser allows anyone with access to a user’s computer to see all of their stored passwords directly from the settings panel.
Software developer Elliott Kember discovered that simply typing “chrome://settings/passwords” into the URL/search bar brings up a list of all stored passwords. While the passwords come up obscured (like: ftp://idmjournal.com dkearns ********), clicking the line brings up a “show” button, and clicking that button reveals the password! Even more shocking, Kember found there’s no way to require a “master password” that can be invoked before seeing the individual ones. Anyone with access to your computer can bring up Chrome and see all of your passwords – and the associated usernames.
Even worse, Kember found that when migrating to Chrome (he was moving from Safari, but moving from IE or Firefox works the same way) you have to import all of the saved passwords, and they will all be available whether you want them to be or not.
Note that Ping Identity’s John Fontana just wrote about this problem and stated that it was first brought to light in 2008!
In a world where we seem to spend an inordinate amount of time talking about strong authentication, flaws like this show that no matter how “strong” we think our passwords and passphrases are they’re completely vulnerable to a flawed system. A password like “=43-9 ut0193q$#qrgvDFG1235Avpr” is no more likely to be safe than “password1”.
The first thing you’ll want to do, of course, is remove all of your passwords from Chrome. Make a note of them, and then delete them.
Next, you’ll need a better “password vault”. As I noted last fall, for many years I used a browser add-in called “Sxipper,” developed by Dick Hardt who was a co-founder of OpenID. Sxipper was not only a tool to remember usernames and passwords (as well as all the details needed to fill out forms) but was also a password generator, creating randomized groupings of letters, numerals and other characters that were well past the ability of most users to remember. But, of course, they didn’t need to remember them – Sxipper did it for them. Sxipper could save a file containing all of your data to local storage (in case there was ever a problem) but, like Chrome, this wasn’t encrypted, nor was authentication required to access Sxipper once your computer was up and running (i.e., authenticate to the OS and you could run Sxipper). Sxipper was officially killed early this year.
Even before that, though, I’d switched to using Chipdrive MyKey from SCM (now Identiv). Besides encrypting the archive file, it uses a USB stick, which makes the service portable among all of your USB-enabled devices. It doesn’t, unfortunately, create passwords, so I do need to be disciplined about that, but it does protect the encrypted password file with a PIN. While it’s true that someone using my computer could use MyKey to authenticate, they wouldn’t be able to obtain a list of my passwords. Of course, they could log in to a site and change the password, so there are potential problems.
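What separates a real vault from Chrome's stored list is that the vault file is useless without a key derived from a master secret. A toy sketch of the idea, assuming PBKDF2 for key derivation – the SHA-256 counter-mode keystream here is for illustration only, not a production cipher (anything real should use a vetted AEAD cipher such as AES-GCM):

```python
# Toy master-password-protected vault: the stored blob is noise
# without the master password. Illustrative crypto only.
import hashlib
import os

def _keystream(key, length):
    """Derive a pseudo-random keystream from the key (demo only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_vault(master_password, plaintext):
    salt = os.urandom(16)   # fresh salt per vault file
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)
    data = plaintext.encode()
    return salt + bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def decrypt_vault(master_password, blob):
    salt, data = blob[:16], blob[16:]
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data)))).decode()

blob = encrypt_vault("correct horse", "ftp://idmjournal.com dkearns s3cret")
print(decrypt_vault("correct horse", blob))
```

The slow key derivation (200,000 PBKDF2 iterations) is the point: an attacker who copies the file still has to guess the master password, and each guess is expensive.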
So what can you do?
It’s simple: stop using passwords to authenticate.
Wait, you say, that isn’t my choice, is it? Well, it could be.
One of the most popular password vaults is LastPass, which encrypts your passwords and requires a master password to change them. It also syncs passwords across all of your devices. But the real kicker – although it’s a pay-to-play premium option – is the ability to add two-factor authentication (2FA) to any site you visit.
LastPass has teamed up with Yubico to enable 2FA using the Yubikey hardware token. In practice, you use the Yubikey to authenticate to LastPass which then authenticates you with the correct username/password combination. Someone gaining control of your computer would still be locked out of password-protected sites if they didn’t have your Yubikey. A single user Yubikey plus a one year subscription to LastPass premium will set you back a minimum of $33. There are bundles for enterprise customers, but no great savings.
While you still need a password to log in, I see this as merely an indication of the account you wish to access (the username+password combination), with the token providing the actual authentication mechanism. That means you really don’t have to go for long, involved passwords – “password1” would be as secure as a 2,000-character bit of nonsense.
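One common way such tokens work – though the Yubikey's default mode uses its own OTP scheme rather than this one – is a time-based one-time password: the token and the server share a secret and each derive the same short-lived code from the clock. A minimal sketch in the style of RFC 6238 (SHA-1, 30-second step, 6 digits):

```python
# Minimal TOTP (RFC 6238 style): token and server derive the same
# short-lived code from a shared secret and the current time.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    counter = int(t if t is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

secret = b"12345678901234567890"   # the shared-secret test value from the RFCs
print(totp(secret, t=59))          # prints 287082 (matches the RFC test vector)
```

Because the code changes every 30 seconds, stealing one observed code buys an attacker almost nothing – which is precisely why a token beats a static password, however long.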
Slowly, major web properties are moving to two-factor authentication. Google did this some time ago; Twitter just recently announced it. But most of these rely on SMS messages as the second factor, and – as last year’s Eurograbber attack, which stole over 36 million Euros through a sophisticated man-in-the-middle attack against SMS systems, demonstrated – this might not be your best choice. As security vendor Check Point described the exploit:
- Target goes to malware website, probably by clicking a link in a phishing email
- Target later goes to bank web site and malware injects instructions into the session that prompts the customer to enter their mobile phone number
- Target is instructed (seemingly by bank) to download “security update” to phone which is actually a variant of the “Zeus in the mobile” (ZITMO) Trojan.
- The Trojan then intercepts the bank’s SMS containing the all-important “transaction authorization number” (TAN), the number the target has to enter in the bank’s web site authentication screen. The Eurograbber Trojan on the customer’s mobile device intercepts the SMS and uses the TAN to complete its own transaction to silently transfer money out of the bank customer’s account.
This all happens quietly in the background with the target only becoming aware when they notice their depleted bank account.
The Google and Twitter 2FA offerings are free, while the Yubico+LastPass combination will set you back $33 the first year, and approx. $12 each subsequent year. On the other hand, I think in this case you really do get what you pay for.
Do note that there are some open-source possibilities which have this functionality to a greater or lesser extent. KeePass (a password manager) can be used with the add-on OtpKeyProv, which provides one-time password access to the KeePass database. A modification of KeePass, called Web KeePass, can be used with WiKID’s 2FA system – directions for this are here. Both of these solutions require a bit of DIY, but it could be educational for you to try.
30.07.2013 by Dave Kearns
We had a lively webinar last week on “The Future of Authentication and Authorization”. If you missed it, you can watch the replay. Essentially, what I was talking about was context-aware Risk Based Access Control (RiskBAC).
The day after the webinar, I got involved in a very lively Twitter chat with a handful of the Identirati/Identorati (some spell it one way, some the other, but it’s the collective term for those in the Identity business in one way or another) about attributes, Attribute Providers (APs), Identity Providers (IdPs) and Relying Parties (RPs).
So how are these related?
Context information, which I noted could also be referred to as “metadata” (a term much in the news lately, see “Metadata: Separating fact from fiction” in the Toronto Star) is a part of the collection of attributes surrounding an identity (or “digital identity” as some would have it). As the Star’s article notes, “The truth is that collecting metadata can actually be more revealing than accessing the content of our communications.” This was neatly summed up by Ping Identity CEO Andre Durand when he said “the sum of the correlation between attributes is greater than the sum of the raw attributes themselves.” But he said that at Digital ID World 10 years ago!
Durand later addressed this issue again when he wrote: “I read a great Editor’s note in CIO Insight this month on the effective end of privacy as corporations build massive customer databases in an attempt to better understand how, who, when and what to sell to people. In federation terms, I call this ‘attribute-hording’, the concept that companies aggregate our attributes and then leverage the aggregation of these attributes to build ever more complex algorithms for predicting our behavior.” And he wrote that in 2004.
Yes, we should have been worried about “big data” back in 2004. With the advent of the US Patriot Act (which actually was passed in 2001, then renewed in 2011), a few forward thinkers (he says, patting himself on the back) noted at that time that the very things revealed by Edward Snowden were not only possible, but likely.
Still, the revelations and the subsequent discussions about metadata make it possible to talk about context and attributes without having to offer a lengthy explanation of them.
Identity Providers (IdPs) should be familiar to most of you. Web sites such as Google, Facebook, Yahoo!, PayPal, Amazon, etc. offer authentication services (usually through OpenID Connect and the OAuth protocol). When you visit many websites, you are prompted to “login” with your Google/Facebook/PayPal/etc. ID. Generally, these sites hold some attribute/value information, but it’s also somewhat limited. (Neither Google nor Facebook knows your street address or social security number, for example.) Nevertheless, it’s quite possible for them to form partnerships with other web-based enterprises who do hold some attribute/value information about you. Google offers a good explanation of this in an article called “Overview of Attribute Providers,” with the following example:
“The popular TV channel HBO operates a website today called hbogo.com that can be used to watch HBO movies. However users have to first login and provide a trustworthy assertion that HBO can use to confirm the person is paying for an HBO subscription through a cable operator. Imagine that a user who visited HBO simply saw a list of popular identity providers, and chose theirs, such as Google. Google would then ask the user for their consent to share some of their information with hbogo.com. In the future that information might include their email address and their street address. If the user gives their consent, then hbogo.com can contact the cable operators who serve the area around the user’s house, and ask if that household has a paying subscription to HBO. If so, then with two simple clicks a user will be able to watch HBO movies on their computer.
Before the user’s identity provider can help the user assert their street address to hbogo.com, the user needs to have first selected an attribute provider who can validate their street address, and link that attribute provider to their identity provider. Companies may even compete to be a user’s attribute provider. For example when a user logs into their online banking service, they may see a promotion to use that bank as the attribute provider of their street address. If they click that promotion, the banks would explain why the user might want to do this, and if the user gives their approval then the bank would redirect the user to their identity provider. The identity provider would then ask the user to consent to using this bank as the attribute provider of their street address.”
Now there’s no question that Attribute Providers will be important in the future, especially in the future of context aware risk-based access control. Some people, in fact, believe that relying parties should connect directly to the APs, bypassing the Identity Providers, since “Identity” (as we noted above) is simply a set of attributes – specifically, the set of attributes that makes you, as an entity, unique within a given namespace (your town, amazon.com, registered EIC attendees, etc.). As an example, I’m not the only David Kearns within my city, nor within my town, nor within my postal code. So saying “David Kearns, Montgomery Village, Gaithersburg, 20886 Maryland USA” isn’t unique (my son has those same attributes). Include my age, my street address, my phone number, the registration on my car, etc. and it does become unique. But there are a number of different Attribute Providers involved in gathering those values. Asking every relying party to connect with every attribute provider is really asking too much.
Identity Providers, though, are ideally placed to interact with users, APs and RPs. In fact, that’s the business of an IdP. So when I wish to do business with a new relying party, one which accepts authentication from an IdP I use, the RP can request what it considers the necessary and sufficient attributes and values concerning me, and the IdP can quickly and easily verify the current values while at the same time getting my consent to reveal them.
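The flow boils down to three roles: the RP asks the IdP for a set of attributes, the IdP gathers values from the registered APs, and nothing is released without the user's consent. A toy sketch (all parties, attribute names and the broker function below are invented for illustration – real deployments would use OAuth/OpenID Connect flows):

```python
# Sketch of an IdP brokering attributes between Attribute Providers
# and a Relying Party, gated on user consent. All parties fictional.
attribute_providers = {
    # e.g. the user's bank vouches for a street address
    "street_address": lambda user: "123 Example Way",
    # e.g. the IdP itself vouches for an email address
    "email":          lambda user: f"{user}@example.com",
}

def idp_release(user, requested, consent):
    """Release only attributes that the RP requested, the user has
    consented to, and a registered AP can actually supply."""
    released = {}
    for attr in requested:
        if attr in consent and attr in attribute_providers:
            released[attr] = attribute_providers[attr](user)
    return released

# The RP (say, hbogo.com) asks for two attributes; the user consents
# to only one, so only that one is released.
print(idp_release("dkearns", ["street_address", "email"], consent={"email"}))
```

The design point is that the RP never talks to the APs directly – the IdP is the single place where consent is collected and enforced, which is exactly why asking every RP to integrate with every AP is asking too much.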
That’s a big part of the future of Access Control, and we’ll explore it in more detail in September in a new webinar called “Authorization as a Calculated Risk.” More details as they become available.