21.05.2013 by Dave Kearns
Another European Identity (and Cloud) Conference has come and gone, and once again it was an exciting week with packed session rooms and excellent attendance at the evening events. I'm not sure we can continue to call it the "European" Identity Conference, though, as I met folks from Australia, New Zealand, Japan, South Africa and all over North and South America. And lots of Europeans too, I should note. Nor were the attendees content to sit back and soak it all in. At least in the sessions I conducted there was a great deal of give and take between the audience and the speakers and panelists. Mostly it was good-natured and information-seeking, but – occasionally – it got a bit raucous.
The track on authentication and authorization – so near and dear to my heart – drew a standing-room-only crowd eager to join in the discussion. As always when AuthN comes up, passwords drew an inordinate share of the conversation. I reminded the panelists and the audience that no less a personage than Bill Gates predicted the "death of passwords" back in 2004 – and that even within Microsoft, passwords are still in use.
Too much energy is being spent both on trying to remove username/password from the authentication process and on trying to "strengthen" the passwords that are used. Neither approach is going to be effective. Passwords – the "something you know" – are far easier to use than "something you have" (a security token) and far less scary than "something you are" (biometrics), so the general public will never entertain the idea of switching.
Password strength is, essentially, a myth. Brute-force attacks get quicker every day, so cracking a password directly gets easier every day. And phishing attacks are getting so sophisticated that there's no need to crack a password (and possibly set off security alarms) when you can induce the user to give it to you willingly.
Two-factor authentication (2FA) had some champions, but most methods have already been shown to be vulnerable either to direct attacks (man-in-the-middle, or MITM) or to the same phishing attacks that subvert "strong" passwords. The object of a phishing attack is, after all, to get the user to log in with their credentials, which are then harvested by the attacker. So go to three factors if you want – it's not much stronger.
I found widespread agreement (with a few diehard holdouts) for a context-collecting risk-based system for Access Control (which I’ve called RiskBAC). Knowing the who, what, when, where, how and why of the authentication ceremony leaves the username/password combo as only one of many factors (the who). In fact, entering a username and correct password isn’t the end of the authentication but merely the trigger to begin the Risk-based Access ceremony or transaction. The other factors are all gathered automatically through system dialogs after the entry of the password has identified the account to which the claimant wishes access.
Of course, once we’re satisfied that the claimant is most likely who he/she claims to be, we then take that information into account along with the other contextual elements to determine the degree of access we’ll authorize to the resource they’re seeking.
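As a rough sketch of how such a ceremony might be wired up, consider the Python fragment below. The context signals, weights and thresholds are entirely made up for illustration; they are not any product's actual scoring logic.

```python
from dataclasses import dataclass

@dataclass
class Context:
    known_device: bool          # the "how": a platform we have seen this user on before?
    usual_location: bool        # the "where"
    usual_hours: bool           # the "when"
    resource_sensitivity: int   # the "what": 1 (low) .. 5 (high)

def risk_score(password_ok: bool, ctx: Context) -> int:
    """Combine the 'who' (the password check) with the other context factors.

    The weights here are purely illustrative.
    """
    if not password_ok:
        return 100                              # no claimant identified; deny outright
    score = ctx.resource_sensitivity * 10
    score += 0 if ctx.known_device else 25
    score += 0 if ctx.usual_location else 20
    score += 0 if ctx.usual_hours else 10
    return score

def decide(score: int) -> str:
    """Map the risk metric to a graded authorization decision."""
    if score < 30:
        return "full access"
    if score < 60:
        return "limited access"
    if score < 80:
        return "step-up: require a second factor"
    return "deny"

# The password entry only identifies the account; the ceremony continues by
# gathering context automatically and grading the access that is granted.
ctx = Context(known_device=True, usual_location=False, usual_hours=True,
              resource_sensitivity=3)
print(decide(risk_score(password_ok=True, ctx=ctx)))
```

The point of the sketch is the ordering: the correct password merely starts the ceremony, and the degree of access falls out of the whole context, not the password alone.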
While the presentation was called "the Future of Authentication and Authorization," I did remind the audience that over 2000 years ago the Romans used the same methods for access control. Biometrics (what you are) was represented by facial recognition, tokens (what you have) by scrolls sealed with the leader's ring (an early use of a security signature), and passwords were, well, passwords – and often changed daily to guard against leaks of the information, something more of us should do today.
There was also a contextual element to the access control ceremony when the guard, on observing the claimant, was able to identify him in the context of where he knew the face from – the morning roll call, or the guardhouse. The sealed scroll had context based on what the guard knew about the location (at the camp or thousands of miles away) and condition (alive and kicking, or breathing his last) of the official who sealed the token.
There were lots of other exciting moments – even aha! moments – in the tracks I did on Trust Frameworks and Privacy by Design, as well as in others' sessions, especially those on Life Management Platforms, a coming technology that many people hearing about it for the first time agreed will be game-changing when it arrives – and that may not be too far off. If you'd like to catch up, see the just-released Advisory Note: "Life Management Platforms: Control and Privacy for Personal Data" (#70745).
And there was exciting non-identity news as well. We of course announced EIC 2014 for next May, but – remember at the top of this post I said it was a larger-than-European conference? Well, we also announced EIC 2014 London, EIC 2014 Toronto and EIC 2014 Singapore. EIC is going worldwide, and the people involved in identity couldn't be happier. Dates for the new venues haven't been finalized yet, but I'll be sure to tell you about them when they are.
07.05.2013 by Dave Kearns
For years we've spoken about the four "A"s of identity and security – Administration, Authentication, Authorization, and Audit – but maybe it's time to drop an "A". Maybe it's time to speak of "Access Control", which encompasses Authentication (sometimes referred to as "AuthN") and Authorization ("AuthZ").
In many instances authorization is binary and tied directly to authentication – if a person is authenticated, then they get access to a resource. The authorization is tied only to the authenticated entity. Consider building security, for example – swipe your proximity card and you’re allowed in. Or, in rather more ancient practice, unlock the door with your key and get access. In the former case, the use of the proximity card (the “token”) is probably recorded someplace, so there is at least a rudimentary audit trail. When the key is the “token”, then there is no trail.
Until recently, the same was true of access to digital resources – if you authenticated to the system (network, server, application, etc.) then you got access as defined for the username you used – most typically to a group of resources.
Note that there's no actual proof that the person being authenticated is the same person for whom that particular account was created. The standard username/password combination that comprises the vast majority of authentication transactions today gives absolutely no assurance that the "proper" user (whatever that means) is the one being granted access. For example, I do password-protect my computer (it's a laptop that travels with me). But my wife knows the password, and has had to use it on rare occasions when I'm not available but information on the machine is needed. The computer has no idea that it's her and not me accessing those resources. Tokens don't improve this situation, and biometrics provide only slightly more proof since, in practice, it isn't the biometric itself (a picture of your fingerprint, for example) that is presented but a key or token derived from its parameters.
I could, of course, set up a separate account for her so that she could authenticate as herself. But for the purposes she might need to access the PC, she would need at least the exact same authorizations that I have. Creating that second account, though, reduces the security of the system. With two accounts, the risk that a breach could occur is actually doubled – the risk of my account being compromised PLUS the risk that my wife’s account could be.
The usual method of controlling authorizations for a single user is to have multiple authentications for that user, multiple identities if you will. On my Windows system, I need to sometimes authenticate as the Administrative user when I need to access system resources, install/remove software, etc. Most of the time, I authenticate as a User with a more limited set of authorizations. The same is true of ‘nix systems, where the root account is used sparingly, and only when needed. Even within applications, a similar system is observed – most of the time, I would authenticate to a database as a user, but occasionally I need to be the database administrator (DBA) in order to, well, do administrative stuff. Again, in reality, most people don’t do this – although they should – choosing the “ease of use” that authenticating as the more powerful user brings.
The bottom line is that the important thing is the authentication. Get that right (which usually means enter the correct password) and the authorizations flow: it’s all or nothing, black or white, good or bad. But with data breaches, especially the theft of usernames and passwords, seemingly coming more frequently as each day goes by (and you’d think organizations would have learned by now, wouldn’t you?) we need to do something different.
For a dozen years or so, the "thing we need to do" has been identified as replacing the username/password combination with something "stronger". But we've learned from study after study that there really isn't anything strong enough – tokens, biometrics and "hardened" passwords are all flawed. And while stealing a biometric is tougher than guessing a password, it's a whole lot more difficult to replace a fingerprint than it is to change a password.
As I’ve said for many years, and as I hope to re-iterate strongly at the upcoming European Identity & Cloud Conference (EIC), context, as part of a well thought out risk-based access management system, is what we need. Some use the phrase “adaptive authentication” to mean, in essence, a dynamic authentication which may require one, two or more factors depending on the circumstances. Still, this is really just one part of risk-based access control. It’s unfortunate that RBAC has come to mean Role-based AC, so we’ll need to come up with a different term – perhaps Risk Managed Access Control (RMAC).
The authentication continues as we’ve always done it – username/password, token, biometric, what-have-you, singly or in combination – but we collect context data (location, platform, date and time, and so on) and evaluate it giving it a risk metric. Alternatively we could use the inverse and call this a “trust metric” – the amount of trust we have in the validity of the identity of the person attempting the authentication. Based on that metric, we grant authorization on a sliding scale, which can be as fine-grained as your rules engine will allow.
We aren’t there yet, but we need to be. The presentations at this month’s EIC can bring us closer. You really should be there.
23.04.2013 by Dave Kearns
Larry Ponemon, of the Ponemon Institute, is well known for excellent surveys about technology issues. And Larry didn’t disappoint when he recently released “Moving Beyond Passwords: Consumer Attitudes on Online Authentication, A Study of US, UK and German Consumers” (warning: pdf file).
In summary, the report of the survey concludes:
- “The majority of consumers would use a multi-purpose identity credential to verify who they are before providing secure access to data, systems and physical locations.
- Banking institutions are considered the best for online validation and strong authentication and identity verification. Consumers in all countries believe banks would be the best to issue and manage a multi-purpose identity credential.
- The benefits of a multi-purpose identity credential are convenience (US & UK consumers) and security (German consumers). Identification and authentication when traveling, accessing the Internet and using social networks are the most popular reasons to have single ID.
- There is no clear consensus on what devices would be preferred to manage their multipurpose identity credential. However, in the US more consumers would prefer their mobile devices for identification purposes. In the UK, it is RFID chips. German consumers seem to favor biometrics.
- If consumers trust the organization, biometrics is acceptable to use for authentication.
- Voice recognition and facial scan are the most acceptable types of biometric authentication. Least acceptable in the US and UK is an iris scan. In Germany, least favored are fingerprints.
- Authentication is important when sharing devices with other users. The majority of consumers believe it is important to have authentication that securely verifies their identity on devices that are shared with other (multiple) users.”
So what we're seeing here is that users favor stronger authentication, but also easier-to-use authentication (thus the preferences for mobile devices, RFID and biometrics over passwords). There's also a strong feeling that the identity provider should be trustworthy, or at least be seen as trustworthy: "Industries and organizations considered by consumers in all three countries as most trustworthy to safely issue and manage a multi-purpose identity credential are: banking institutions, credit card and Internet payment providers, telephone, wireless or cable services companies, healthcare providers and postal and delivery services. Least trusted are educational institutions, Internet service providers and retailers."
The bottom line appears to be that users are looking for ease-of-use coupled with security and trust and these are exactly the issues we will be exploring next month at the European Identity & Cloud Conference (EIC). In particular, I’ll be moderating a track on Authentication & Authorization featuring a detailed look at “Versatile Authentication, Risk- and Context-Based Authentication: Why you need these Concepts”. Risk-based Access Control using context is a subject near and dear to my heart. It appears to be what the consumers in Ponemon’s survey are groping towards, without being able to articulate exactly what they want. It’s also something that seems to be gaining more traction in the marketplace, at least if I can judge by what I’m reading lately.
Chris Zannetos, CEO of Courion, recently wrote a blog post called “Context is everything”. In this look at what he calls “security intelligence,” Zannetos says:
“The activity and traffic monitors such as SIEM and deep packet inspection products have been looking at streams of information flows without the context to make sense of them. This is a bit like analyzing a baseball game by looking only at the types of pitches and result (hit, walk, out) — without understanding who is pitching, who is up to bat, what their past patterns have been, the ballpark, or the weather. In other words, the ‘Moneyball’ factor has been missing.”
[For my non-North American readers: substitute "football" (or "futbol") for "baseball".]
And, of course, context is about more than a single packet – it’s the Who, What, When, Where, Why, and How of a transaction. Chris even alludes to a deeper context – the history of the context of similar transactions, which should be included in the analysis much like a Bayesian spam filter is used with email.
The second piece I read about context was from Jeff Rosenberg, a technical instructor in the Client Services group at Ping Identity. He didn’t use the word “context” in his blog entry called “Identity as a Rental (IDaaR),” but he did describe context-based authentication when he wrote:
“Did the user authenticate via password, certificate or one-time code? Is this user within the corporate network or coming in externally? Which training level or security clearance is required? Perhaps attribute-level permission is involved, such as LDAP group membership. When these questions are satisfied, the user checks out and the service is provided.”
Rosenberg then goes on to talk about the short-term use of particular attributes which are appropriate for the context of a given transaction, but that’s more appropriate for KuppingerCole’s discussions of Life Management Platforms, another subject that will be well covered at EIC next month.
Context – as a contributor to Risk-based Access Control, as collected for SIEM, and as a way of packaging identity attributes for short-term use – is definitely a winner. And it is readily – and easily – available to most of you who use some form of SAML-based authentication/authorization system. You might wish to read (if you've nothing else to do right now) "Authentication Context for the OASIS Security Assertion Markup Language (SAML) V2.0" (another PDF file), all 70 pages of it.
But for today, the introduction should be sufficient: “If a relying party is to rely on the authentication of a principal by an authentication authority, the relying party may require information additional to the assertion itself in order to assess the level of confidence they can place in that assertion. This specification defines an XML Schema for the creation of Authentication Context declarations – XML documents that allow the authentication authority to provide to the relying party this additional information. Additionally, this specification defines a number of Authentication Context classes; categories into which many Authentication Context declarations will fall, thereby simplifying their interpretation.” In other words, this is a way to provide context to the transaction. Once you take context into account, then allowing a simple, easy-to-use factor (password, fingerprint, hardware token, etc.) is no longer a problem. Guessing someone’s password doesn’t get you the context in which it’s used and thus raises the risk factor for that transaction.
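As a rough illustration of how a relying party might read that declared context, here is a sketch that pulls the AuthnContextClassRef out of a SAML 2.0 assertion and maps it to a risk weight. The class URIs are standard SAML 2.0 authentication context classes; the weights, and the idea of treating them as a risk input, are my own illustrative assumption rather than anything the specification prescribes.

```python
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# Illustrative weights only: lower means more confidence in the authentication.
CLASS_WEIGHTS = {
    "urn:oasis:names:tc:SAML:2.0:ac:classes:Password": 50,
    "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport": 40,
    "urn:oasis:names:tc:SAML:2.0:ac:classes:TimeSyncToken": 20,
    "urn:oasis:names:tc:SAML:2.0:ac:classes:SmartcardPKI": 10,
}

def authn_context_weight(assertion_xml: str) -> int:
    """Return a risk weight for the authentication context declared in an assertion."""
    root = ET.fromstring(assertion_xml)
    ref = root.find(".//saml:AuthnStatement/saml:AuthnContext/saml:AuthnContextClassRef", NS)
    if ref is None or ref.text is None:
        return 100   # no declared context: treat as highest risk
    return CLASS_WEIGHTS.get(ref.text.strip(), 80)   # unknown class: still high risk

sample = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:AuthnStatement><saml:AuthnContext>
    <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
  </saml:AuthnContext></saml:AuthnStatement>
</saml:Assertion>"""
print(authn_context_weight(sample))   # 40 with the illustrative weights above
```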
We have the tools, all we need is the effort to provide more secure, yet easy-to-use authentication ceremonies. What’s stopping us? Let’s talk about that at EIC next month in Munich.
09.04.2013 by Dave Kearns
2FA: it's an abbreviation (word? acronym?) I see a lot these days. But it's not, as I first thought, teenage texting slang ("OMG, that's 2FA!") for "too freakin' amazing". No, it's a shortened version of "two-factor authentication", which has been a hot topic and buzzword since Google began promoting it heavily (although they call it "two-step verification") after the now infamous hack that struck Wired magazine's Mat Honan (see "The Honan Hack and the BYOI meme") last summer. Suddenly everyone is writing about 2FA. Of course, they rarely mention that two weak factors can be worse than one strong factor – Google, for example.
But two-factor authentication is really only one case within the more established paradigm of multi-factor authentication (MFA), where “multi” stands for “more than one” and might be two but could be three, four or more. And multi-factor authentication is hardly the new kid on the block – I’ve been writing about it since last century.
Yes, it was in January, 2000 that I wrote two newsletters about Novell’s new release, NMAS – Novell Modular Authentication Services. As I said at the time:
“NMAS lets network administrators choose among different authentication methods, including traditional password control and adding biometric and smart card methods. While biometric and smart card access isn’t new, it’s the control over the methods used, and the subsequent access granted, which makes NMAS a major addition to NDS security.”
With NMAS, you could specify one, two or even three factors to use for authentication and the factors could be any of What you know (password), What you have (smart card) or What you are (biometric). Depending on the factor or factors used, the administrator could further restrict the user’s access rights. Nice to see that Google, Apple and others are finally climbing onto the MFA bandwagon.
MFA is, of course, an integral part of Risk-Based Access Control, especially when it can be invoked selectively depending on the risk factors involved in an authentication session.
You'll remember, I hope (if not, go read "Passwords & Tokens & Eye Scans, Oh My!," we'll wait) that the calculated risk factor for an authentication/authorization event can be used to trigger multiple factors for verification in the authentication ceremony. It might be that someone is requesting access to high-value resources, or they may be requesting access from an unfamiliar location or platform. Or it could be that the access requested falls outside the user's standard pattern of time of day or time of year (e.g., tax season). Whatever the case, a calculation of high risk should lead to multi-factor authentication for that user at that time.
In some cases (attempts to log in as root or admin, for example) you should always require MFA, because the risk is always going to be high.
But it’s not just hardware tokens, biometrics and passwords that should make up the MFA mix. A lot of the contextual items you look at when evaluating risk can also be considered a 2nd (or 3rd) factor in the authentication ceremony.
If, for example, the user is accessing the network from their typical endpoint (office desktop PC, home PC, laptop, smartphone, etc.), then that can count almost as much as a hardware token. If your system then sends an out-of-band SMS to the user with a one-time password (OTP) to be entered during authentication, you might call that 3FA.
But how secure is 2FA, or MFA?
Noted security expert Bruce Schneier wrote about hacking two-factor authentication back in 2009 (referencing something he had written in 2005!) and noted:
“Here are two new active attacks we’re starting to see:
Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.
Trojan attack. Attacker gets Trojan installed on user’s computer. When user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.”
Then why does everyone, it seems, believe that using two factors for authentication is better than using only one? It's simple: if implemented properly, 2FA does reduce the risk of unauthorized access. Let's say the risk of unauthorized access using just a password is 1 chance in 20 (5%), which is probably a little high. Then let's say the risk when using a different factor (a hardware token, say) is lower, perhaps 1 in 1,000 (0.1%). What's the risk when both are used? Assuming the two factors fail independently, you multiply the first risk (5%) by the second (0.1%), which yields 0.005%, or 1 in 20,000 – a much better risk factor, I think you'll agree! Of course, if you use a riskier second factor (say 1%, or 1 in a hundred) then the overall risk is 1 in 2,000 (5% times 1%), which isn't as secure as the hardware token we postulated.
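The arithmetic is simple enough to sanity-check in a couple of lines; the key assumption, worth stating explicitly, is that the factors are compromised independently of one another.

```python
def combined_risk(*factor_risks: float) -> float:
    """Multiply per-factor compromise probabilities, assuming independent failures."""
    risk = 1.0
    for r in factor_risks:
        risk *= r
    return risk

print(combined_risk(0.05, 0.001))  # password (1 in 20) + hardware token (1 in 1,000) -> 0.00005, i.e. 1 in 20,000
print(combined_risk(0.05, 0.01))   # password + weaker second factor (1 in 100)       -> 0.0005,  i.e. 1 in 2,000
```

The phishing and MITM attacks above are exactly the cases where that independence assumption breaks down, which is why two weak factors can end up worse than one strong one.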
The important thing to remember, though, is that you need to set a realistic risk factor for each authentication factor in your ceremony. The same realistic view should also govern how you look at the various context factors when weighing the risk involved in any particular transaction.
The bottom line is that it’s all about the risk, and your job is to minimize the risk either through strengthened authentication protocols or through reduced authorization rights – or both. I’ll be going into more depth on this when I present “Versatile Authentication, Risk- and Context-Based Authentication: Why you need these Concepts” along with some lively panel discussion on the topic at the European Identity & Cloud Conference 2013 coming up next month. I hope you’ll be there.
26.03.2013 by Dave Kearns
Two documents crossed my desk this week – a survey and a “planning guide” – which fit nicely with two recent papers from KuppingerCole, illustrating a need and (unknowingly) confirming our conclusions.
The first is about the current buzzword acronym BYOD (for “Bring Your Own Device”) which my colleague Martin Kuppinger just released an advisory note about (“today it’s almost exclusively mobile devices – smartphones, tablets, ‘phablets,’ etc. – that are referred to with BYOD: a focus that is too narrow…”) but which appears to be with us at least for the near term. The new piece is a survey, commissioned by a group of Cisco partner firms led by Pine Cove (based in Billings, MT).
For this study, the group of Cisco partner firms used a randomized online sampling of full-time American workers and analyzed 1,000 responses. The population of Americans employed full-time who own a smartphone is roughly 53 million, according to the Bureau of Labor Statistics and the Pew Internet & American Life Project. The margin of error of the study is 3 percent.
Among the interesting findings of the study:
- 62% of U.S. employees who use their own smartphone for work do so every day;
- 92% of U.S. employees who use their smartphones for work did so this week;
- Only 1 in 10 workers get some kind of work stipend for their smartphone;
- 39% of workers who use personal smartphones for work don’t password protect them;
- 52% access unsecured wifi networks;
- 69% of BYODers are expected to access work emails after hours.
The one glaring number, to me, is that only 10% of the workers who use their smart device to help them work receive any sort of compensation from their employer. What that tells me is that these American organizations are not supplying their employees with the tools they need to do their job efficiently and productively.
The bottom line, though, is that IT departments should not still be debating whether or not to support BYOD – the devices are going to be used either way. If IT is going to serve the enterprise and protect its resources, then IT needs to quickly extend its endpoint management plan to cover smart devices, and just as quickly develop policies to bring those devices into the organization's Information Stewardship practice.
And speaking of Information Stewardship, I just released a white paper called Using Information Stewardship within Government to Protect PII, an offshoot of the advisory note From Data Leakage Prevention (DLP) to Information Stewardship released last fall by my colleague Mike Small and myself. But what really excited me was a guide written for the Online Trust Alliance (OTA) called the 2013 Data Protection & Breach Readiness Guide.
The OTA describes its mission as “to enhance online trust and the protection of users’ security, privacy and identity, while promoting innovation and the vitality of the Internet.”
One caveat when viewing the report: the OTA still uses the term “data” where we at KuppingerCole prefer “Information”. As we’ve said, “Loss or leakage of data is not necessarily a loss of information – understanding the difference between data and information is important to ensure protection.” Data might simply be a list of passwords. As such, it’s no more useful than a dictionary. But a list of usernames AND passwords – that’s information, and that could be a problem should it be leaked into the wild. So, if you read the OTA report, remember that when they speak of data they really mean information.
I bring this up because a large part of the report deals with what the OTA calls “Data Lifecycle Management & Stewardship.” As the report notes:
“OTA advocates the need to create a data lifecycle strategy and incident response plan, evaluating data from acquisition through use, storage and destruction. A key to successful data lifecycle management is balancing regulatory requirements with business needs and consumer expectations. Success is moving from a perspective of compliance, the minimum of requirements, to one of stewardship where companies meet the expectations of consumers.”
Of course, this is exactly what Mike and I outlined as good Information Stewardship.
Further, the report bolsters some of our own conclusions when it notes that “Businesses need to continually evaluate the data through each phase [of the lifecycle] and accept four fundamental truths:
- Privacy and use polices need to be continually reviewed and updated.
- The data they collect includes some form of personally identifiable information (PII).
- If a business collects data it will experience a data loss incident at some point.
- Data stewardship is everyone's responsibility."
These four principles need to be a part of every organization’s Information Stewardship policy.
It's now long past the time when analysts and pundits should have to tell you that you need an Information Stewardship policy. It's also long past time to incorporate smart mobile devices into your endpoint policies – and not as a separate "BYOD" policy. Your endpoint strategy should cover these devices along with desktop/laptop machines in the office, at home, and "on the road" (i.e., internet cafes).
If you’re a KuppingerCole client, ask your representative how we can help. If you aren’t – why aren’t you? And, either way, be sure to plan on being at EIC 2013, where BYOD and Information Stewardship will feature prominently.
12.03.2013 by Dave Kearns
Recently the Massachusetts Institute of Technology (MIT) held a conference on the “Future of Health and Wellness.” One of the major takeaways from the conference (according to CIO magazine) was “6 Innovations That Will Change Healthcare.” These are:
- Reality Mining: Using Data to Influence Healthy Behavior
- Social Networking: For Best Results, Group Like-Minded People
- Usability: Give Users Something Familiar
- Home Care: Make It Easy, Involve Everyone
- Emotion Sensors: For the Willing, Anything Can Be Monitored
- Wellness Counseling: Sometimes, People Like Talking to Computers
That’s certainly a good list, but hardly ground-breaking I suspect. And none of these address the problem I encountered in moving from California to Maryland a couple of years ago which necessitated a change in health providers.
Both my California Health Maintenance Organization (HMO), the Palo Alto Medical Foundation (PAMF), and my Maryland HMO, Kaiser Permanente (KP), have a good online presence. Both allow me to make appointments, see test results, contact my medical practitioners, re-authorize prescriptions and more through my web browser. All good, but Kaiser should definitely plan on an upgrade to address a number of usability issues.
What astounded me, though, was that there was no way to transfer information – test results, diagnoses, prescription information, etc. – from PAMF to KP! Nor, for that matter, was it possible to go in the other direction. Likewise, I could not give my KP primary care physician a “proxy” to see my PAMF data. All I could do was to bring up the data in the PAMF site, highlight, copy and paste into a word processor document, then print it and carry it to my KP physician.
That does seem like a 19th century answer to a 21st century problem. But I wouldn’t have brought it up if I didn’t have an idea about how to solve it, and the answer is LMP – Life Management Platforms.
My colleague Martin Kuppinger, in his Advisory Note "Life Management Platforms: Control and Privacy for Personal Data," says that: "Life Management Platforms will change the way individuals deal with sensitive information like their health data."
Now one reason why I can't easily port data from one healthcare provider to another is the requirements of the US Health Insurance Portability and Accountability Act (HIPAA) of 1996. Two major elements of that act are privacy and security. Summaries (if you can call a 25-page document a "summary") are available from the US Department of Health and Human Services.
The privacy summary states: "A major goal of the Privacy Rule is to assure that individuals' health information is properly protected while allowing the flow of health information needed to provide and promote high quality health care and to protect the public's health and well being," which is all well and good, but almost all healthcare providers have chosen not to allow the actual individuals described by the data to have much control over its use – except, as I noted, using 19th century tools.
Now, as Martin will tell you (and keep telling you until the concept sinks in), Life Management Platforms are much more than Personal Data Stores, much more than some personal database of attributes and identifiers. While the LMP does integrate a full panoply of attributes and identifiers, it also – indeed, must – interact with what another colleague, Craig Burton, calls "the API Economy." With an open, accessible Application Programming Interface (API), my LMP could interact with each of my healthcare providers, move data into my personal health record datastore and move it on to another healthcare provider as needed. For emergency situations I could conceivably have an API connection between my healthcare data and an organization such as the MedicAlert Foundation, so that up-to-the-minute information would be available to first responders and other emergency service providers through the contact details on my MedicAlert bracelet or pendant – whether I was conscious or not.
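To make the idea concrete, here is a hypothetical sketch of what such an API-mediated transfer might look like. Every endpoint, token and path below is invented for illustration; no such common LMP or provider API exists today, and the sketch assumes the third-party requests package is available.

```python
import requests  # assumes the 'requests' package is installed

# All URLs below are placeholders; no real provider exposes these endpoints.
OLD_PROVIDER = "https://api.pamf.example/v1"
NEW_PROVIDER = "https://api.kp.example/v1"
LMP_STORE    = "https://api.my-lmp.example/v1/health-records"

def move_health_records(old_token: str, lmp_token: str, new_token: str) -> None:
    # 1. Pull my records from the old provider, under my own authorization.
    records = requests.get(f"{OLD_PROVIDER}/patients/me/records",
                           headers={"Authorization": f"Bearer {old_token}"}).json()

    # 2. Store them in my personal health record datastore on the LMP.
    requests.post(LMP_STORE, json=records,
                  headers={"Authorization": f"Bearer {lmp_token}"})

    # 3. Share only what the new provider needs, on my terms.
    requests.post(f"{NEW_PROVIDER}/patients/me/records/import", json=records,
                  headers={"Authorization": f"Bearer {new_token}"})
```

The important design point is that the individual, via the LMP, is the integration hub; neither provider ever needs a direct relationship with the other.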
Of course, most of the “6 innovations” CIO magazine was touting as necessary for modern healthcare are also easily enabled with Life Management Platforms:
Reality Mining (Innovation #1): Using Data to Influence Healthy Behavior – a mobile device collects data, stores it in your health record, then combines it with very large data sets (perhaps with a cloud based service) to correlate activities/lifestyles with both good and bad health effects. Another API-based service can then take this data and formulate a personalized health plan (see innovation #6), or suggest social networking opportunities to reinforce that health plan (Innovation #2).
Usability (Innovation #3): Give Users Something Familiar – the LMP provides a common user interface for all of its functions – a major improvement over today’s mish-mosh of services and applications designed only to stand on their own.
Home Care (Innovation #4): Make It Easy, Involve Everyone – using open APIs, caregivers as well as emergency monitors and providers could have the information they need – and, from a privacy perspective, only the information they need – to ensure continued good health without the need for early institutionalization. Health monitoring systems, by the way, as part of the Internet of Things, can easily interface with the Life Management Platform, thus satisfying Innovation #5, Emotion Sensors: For the Willing, Anything Can Be Monitored.
Wellness Counseling (Innovation #6): Sometimes, People Like Talking to Computers – this almost perfectly describes a relatively easy to implement LMP service that would ensure individuals understand their prognosis and course of action, but also take a major role in implementing those activities.
Hopefully you’ve seen that Life Management Platforms are just what the doctor ordered for 21st century healthcare. But the really amazing part is that healthcare is only one small area in which LMPs can improve our lives. Banking, insurance, shopping, travel, lifestyle, food, social and work responsibilities – almost everything we do in our lives can be enhanced through a Life Management Platform.
We’ll be covering LMPs (and the API Economy as well as the Internet of Things) in more depth at the European Identity and Cloud Conference in May. I hope to see you there.
26.02.2013 by Dave Kearns
I read a lot. Mostly about identity, security, the cloud and other tech topics, but because I’m a writer I’m also interested in the tools of the trade. That’s why, every week without fail, I read the World Wide Words newsletter. Through it, I find out about words such as this past week’s “nidicolous” (“If your offspring are proving recalcitrant or obstreperous you may like to hurl the epithet nidicolous at them. It will be accurate and tantalisingly unclear; it might even provoke them to crack open a dictionary to discover whether you’re insulting them.”) No, I won’t tell you. Go to the web site.
The reason I bring up World Wide Words here, though, is that this past week it intersected with what we’re talking about at KuppingerCole: The Internet of Things.
Editor Michael Quinion says about it that “[t]he reference is to the way that equipment of many kinds is now fitted with embedded computing technology, not only the obvious items like telephones and video recorders but also your car, your washing machine and your refrigerator as well as your lightbulbs. It is no longer futuristic fiction to suggest your refrigerator might be able to report you’re low on bacon or eggs and order up fresh supplies. Or that a bathroom cabinet might monitor your pill consumption to remind you to take the next dose, organise refills and allow your doctor to supervise your case.”
That all seems very reminiscent of a presentation I gave in the fall of 2000, and recalled here last spring in “Back to the (digital) future”. The Internet of Things and Life Management Platforms are inextricably intertwined.
But it’s wrong to think of the Internet of Things as somehow separate from the internet of people. It is really an Internet of People, Things, and Services (IoPTS). In fact anything which can be uniquely identified on the ‘net is part of the mix. It’s a given in the identity business that the use of cloud services is architected on an identity foundation. It’s also fairly evident to all that identity is the basis of regulatory compliance. The reality, which not everyone will admit as yet, is that Identity is the foundation of every transaction that occurs on the internet. But it’s becoming more apparent all the time that it’s not just the “who” identity that is important, but also the “what” and the “where” (i.e., the platform that the “who” uses to do the “what”). In order to deliver cloud services properly, the provider needs to know the user, the user’s permissions, the user’s capabilities and the user’s needs. The “needs” include precise data on the service, its version and its optional components. The “capabilities” reflect the hardware platform the user will use the service on.
In order to correctly log and audit activity for regulatory purposes, the compliance service needs to know precisely who is doing what to which information and where that activity is occurring. All of this requires that we can easily, automatically and uniquely identify the services, applications, and platforms that are being used as well as the attributes of each that are necessary to make a decision (for cloud services) or satisfy a policy (for regulatory compliance).
In order to communicate with others we need to be able to be sure of their identity, and they of ours. That holds true whether we’re talking to another person via email, to our friends and family on Facebook, an ecommerce retailer, our work, our schools or our government – we need to be sure of who they are and they need to be sure of who we are.
Identifying devices is an outgrowth of both manufacturing and inventory control. A manufacturing bill of materials could be considered an identity document (with a serial number as a unique identifier) containing a list of attributes (the parts specifications) for an identified “thing.” Inventory control, carried to its limits, uniquely identifies not only each desk in an organization but each drawer in each desk – and possibly each pencil in each drawer.
Less tangible items, such as applications and services, don’t have quite the same legacy of identity. When all services were located in the datacenter or server room and IT went from desk to desk doing installations it wasn’t necessary for the user to be able to identify the service in any meaningful way. There’s versioning, but that doesn’t identify a specific instance, just the general code. Each instance of a non-trivial service or application will also include parameters unique to the time, place and users involved in its execution. And when that service is cloud-based it’s all the more important that it can be identified as the specific, and valid, instance we expect it to be.
A full-blown identity management solution will have to understand that it's no longer just about people. While personal identity will remain important, a new superset of identity will emerge. Prakash Ramamurthy, now Chief Product Officer at LifeLock, called this "entity identity" when he was working on IdM at Oblix (and he said it with a straight face). That's catchier than IoPTS for sure, but whatever we call it, I expect you'll be hearing a lot more about these things in the months to come.
At the European Identity and Cloud Conference in May we'll be talking about IoPTS. Among the topics we'll explore:
- Connected Objects, Real World Internet, Web of Things: Visions and Business Models in the IoPTS World
- Reference Models, Initiatives and Architectures for the Internet of Things
- IoPTS Security and Privacy Concerns and how to address them
- Connected Vehicles, Life Management Platforms & The API Economy
There’ll also be a lot more about Life Management Platforms. The really forward thinkers, the futurists among you, will do everything they can to cover both topic areas. They’re connected now, and they’ll be totally intertwined in the future. See you in Unterschleißheim.
12.02.2013 by Dave Kearns
At last week's Kaspersky Labs Analyst Summit, Chief Marketing Officer Alexander Erofeev said that for 2013 the phrase "protecting who you are" would be the theme for the company. This made me pause and think about what "who you are" means. Of course, as an Identity Management analyst my first thought was that it was identity, and identity attributes, that Erofeev was talking about. But further reflection (and the rest of his presentation) led me to understand that it's really Information Stewardship that the company is leaning towards – even if they don't use the term.
For the corporate entity, “who you are” encompasses all of the data – both structured and unstructured – that makes up the information that not only constitutes the wealth of the organization but also the identity of the organization. Think, for a moment, about those enterprises you know and “who they are”. How do IBM and Apple differ? Mercedes and KIA? Lufthansa and Air France? Sure, corporate identity owes a lot to marketing, for how a company wants to be perceived, but beyond that marketing we still form impressions about the organization based on the data we gather from non-marketing sources.
Personal data, intellectual property and non-public information (financial information, mergers and acquisitions data, executive communications, legal and regulatory matters, etc.) taken together make up a large part of the “who you are” for the organization. Protecting it is just one part of Information Stewardship (for more on the broader topic, see the KuppingerCole Advisory Note: “From Data Leakage Prevention (DLP) to Information Stewardship”).
Of course, the best way to protect who you are, according to Erofeev, is to install the full suite of Kaspersky products, which they bundle as "Kaspersky Total Security For Business". This includes:
- Cloud-assisted protection via Kaspersky security network
- Application control
- Application whitelisting
- Web control
- Device control
- File server protection
- Mobile device management (MDM)
- Mobile endpoint security
- Systems configuration and deployment
- Advanced vulnerability scanning
- Network admission control
- Patch management
- Mail server security
- Web / internet gateway protection
- Collaboration server security
During his session, Kaspersky Chief Product Officer Petr Merkulov also reviewed what's on the roadmap for 2013. This includes more SIEM integration (IBM, HP, RSA), patch management for more platforms (Linux distributions, Macintosh, etc.) and greatly increased compliance and regulatory reporting and management.
That certainly appears to be comprehensive. And "comprehensive" was the point Merkulov was trying to make when he said that companies who use security apps from many different vendors risk leaving open cracks for malware to slip through; with a comprehensive solution managed through a single console, bad things are less likely to happen.
This, of course, resurrects the age-old dichotomy – what’s better, an integrated suite or a group of “best of breed” solutions? It was in 2004 that I wrote “the argument will always rage as to whether it’s better to purchase best of breed products from several vendors or a homogenous suite from a single vendor.”
I've generally come down on the side of the tightly integrated single-vendor suite. As I pointed out at the European Identity and Cloud Conference in 2011, you need the services assembled in a seamless way so that there are no holes and so that it Just Works. You need this, of course, because almost all of the data breaches we see involve holes in the system, poor connectivity between security and protective services, or take advantage of manual procedures to evade policy. Additionally, during that same conference, I discovered through feedback from the attendees that the "Best of Breed" label may be on its last legs. The problem is that no one can agree on what's best. As someone said, if there really could be an objective "Best of Breed" then we'd only have one religion and one political party! Instead, attendees seemed to coalesce around the thought that what mattered was having the services they perceived they needed. Interestingly, though, the folks who gave me feedback also said that the number of vendors delivering these services was irrelevant. More vendors require more diligence on the end user's side to be sure that everything works together seamlessly with no holes, but the benefit of getting the right services outweighed the benefit of a single vendor providing better connectivity and "fit."
So it’s not a question of having the “best” (whatever that means) anti-malware coupled with the best mobile device management coupled with the best patch management, etc. What you need is apps and services covering all the important (to you) areas without holes for bad things to slip through. A solution like Kaspersky’s makes this easier, as they cover many different areas in a single suite, but that doesn’t mean that some other suite (or even stand-alone services) couldn’t form the basis of the security part of your Information Stewardship. After all, Information Stewardship is about much more than security so, of necessity, there will be multiple vendors represented in the products and services you’ll be using.
Nevertheless, when assembling the tools for your Information Stewardship toolbox, you won't go far wrong by including Kaspersky's Total Security for Business.
Information Stewardship and Information Security are just two of the themes for this year’s European Identity and Cloud Conference (EIC 2013) coming in May. If you aren’t registered yet, what are you waiting for?
29.01.2013 by Dave Kearns
Over 25 years ago I started in the networked computer field worrying about authentication, usernames and passwords. And despite all the weeping and wailing about passwords in the intervening years, I still spend an inordinate amount of time thinking, writing and speaking about them.
Just last week, Oracle’s Mike Neuenschwander (formerly with The Burton Group) organized a lively tweet chat on authentication issues (search Twitter for #authchat to see what’s left of the thread) which showed surprising agreement about the future of passwords for authentication.
The week before, Google had announced what the press called a “war on passwords,” when they rolled out a beta project to use Yubikey hardware tokens in a Near Field Communication (NFC) system for authentication. My colleague Martin Kuppinger looked at this possibility just the other day, and liked it – with cautions.
The thing is, most analysts, IT execs, security professionals and others with a stake in authentication services agree that passwords should be removed from the process. They're inherently insecure; someone recently speculated that the "secure lifespan" of a new password (the time it would take to crack it) is down to less than a second, on average.
The hundreds of data breaches in just the past year unveiled hundreds of thousands of passwords. And while many bemoan the simplicity of the passwords people choose ("password," "qwerty," "12345," "monkey") and demand more complexity, others note that the more complex the password, the more often a user will write it on a sticky note attached to their monitor.
It was just over a year ago that I suggested going to a Privileged Management (PxM) system – called by some Privileged User Management (PUM), and by others Privileged Account Management – as a sort of SSO-on-steroids answer to the password problem.
The basic idea is that passwords are here to stay, and all we can hope to do is reduce our dependence on them or reduce their exposure. By configuring the PxM-SSO system to reset passwords after every use, to use complex combinations of letters, numbers, upper/lower case and other marks, and to never reveal the password chosen, we significantly increase the amount of time needed to break a password. In other words, if we make breaking into the system more expensive than the value derived from the break-in, we reduce the incentive for the cracker. As the old proverb goes, when a group is being chased by a bear you don't have to be the fastest runner, just faster than the slowest.
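A minimal sketch of the idea, using Python's standard secrets module; the in-memory "vault" and the check-out function are hypothetical stand-ins for whatever your PxM product actually provides, not a real product's API.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def new_complex_password(length: int = 24) -> str:
    """Generate a long random password mixing cases, digits and other marks."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def check_out_account(vault: dict, account: str) -> str:
    """Hand the current password to the session, then immediately rotate it.

    'vault' stands in for the PxM system's protected credential store; in a real
    deployment the reset would also be pushed to the target system itself.
    """
    current = vault[account]
    vault[account] = new_complex_password()   # reset after every use
    return current

vault = {"db-admin": new_complex_password()}
session_password = check_out_account(vault, "db-admin")   # used once, never reused
```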
Of course, as many have pointed out, that “solution” doesn’t remove passwords at all. That’s very true, but also well beside the point. There is no solution that, in the foreseeable future, will remove all reliance on passwords, especially when we speak about the mobile market.
Ever since Apple acquired AuthenTec (maker of 2D fingerprint sensors), there's been talk that the next iPhone (either the 6 or the 5S) will include a fingerprint reader. I've been waiting for that development for four years. But as one wag commented, most smartphones have shiny surfaces which any thief should find to be very thick with fingerprints that could be lifted and reused.
Whether or not that development leads to the use of biometrics rather than a PIN to unlock a smartphone remains an open question. Too many people seem to make a living out of denigrating biometrics for authentication. Lurid tales of people having their digits cut off to fool fingerprint readers are scary, if not very truthful.
Near Field Communications (NFC), used in devices such as the Yubico Yubikey being looked at by Google, is simply a refinement of RFID with a much smaller range. In this case, smaller is supposedly better. RFID has been criticized for broadcasting data too far, allowing the nefarious to “eavesdrop” on communications and harvest all sorts of interesting “stuff”. But NFC devices still have the same flaw, what could be the same fatal flaw, that RFID devices have. It’s the device itself that’s being authenticated no matter who is in possession of it. No matter what form factor the NFC device takes – the credit card sized bit of plastic that Martin favors or the wearable ring that Google is talking about – it’s still quite possible for it to fall into the hands of the cracker with no need for lopping off fingers.
Proponents of NFC devices say that multi-factor authentication is the key. Invariably, this leads us back to a password, a PIN, a passphrase or other shared secret. Well, for many it does. I still prefer a biometric as the second factor. But those who feel a password can be used frequently call for a one-time password (OTP) distributed out-of-band (perhaps by SMS message). The recent Eurograbber exploit should give us pause when considering that solution.
Last week’s #authchat tweet chat pretty much concluded in general consensus about two things: 1) password authentication is bad; 2) passwords are going to be around for a long time, although perhaps in a diminished role.
Winston Churchill once said: “democracy is the worst form of Government except for all those other forms that have been tried from time to time.” Could we paraphrase that and say passwords are the worst form of authentication except for all those other methods that have been tried from time to time? And if we will continue to use passwords, in one way or another, which is the best way? Should we have one strong password we use for all authentications? Different strong passwords for each authentication? A combination where an SSO/PAM handles different strong passwords for each authentication while being accessed by one strong password at initialization?
Here’s the question I want to leave you with today: if you want to protect your eggs, do you hide them – individually – all over the farm or do you put them all in one basket and hide it under a hay rick?
In my days as a network manager I learned that there's always a bottleneck in the network where traffic slows. When you fix that, another place becomes the bottleneck. It's the same with authentication – we can always identify the weak point, but when we fix it, doesn't something else become the weak point?
15.01.2013 by Dave Kearns
Last time out, I ended by saying “Next time we’ll take a look at two ideas that, hopefully, will be the talk of 2013.” I lied. Depending on how you look at it, it’s either four ideas – or one idea. And there’s sure to be a buzzword/phrase/abbreviation/acronym or two coming about from it – or them.
I do know that there are four concepts, known fairly well within the identity community, that need to coalesce to create a grand scheme which can be turned into a buzz phrase and picked up by the general media so let’s take a look and see how they’ll fit together.
At the root of the grand idea is The API Economy. There's much more about this in our Advisory Note, "The Open API Economy," but here's a quick run-through:
“The core thesis of this document is that the entire industry is moving towards exposing core competency to programmers, partners, customers and other constituents through APIs. The trend is so important to the current and future success of organizations that it is referred to in this document as an “economy”. The word “open” is often added in front of API and the API economy. Open does not mean “free” necessarily, but is intended to mean “accessible.” Accessibility is measured by both availability and how well it is documented for use.”
With the number of available APIs for services growing at an exponential rate, the second component of the grand idea can become more than just an abstract concept.
The personal data ecosystem (PDE) is an idea we've kicked around in one form or another for over a decade. As Ontario Information and Privacy Commissioner Ann Cavoukian has defined it:
“The Personal Data Ecosystem (PDE) is the emerging landscape of companies and organizations that believe individuals should control their personal data, and who make available a growing number of tools and technologies to enable this. Aside from legal requirements, the starting premise of the PDE is that individuals control the sharing of their own ‘official record,’ (also called a ‘golden record’) and set the rules as to who can access and use their personal information for what purposes. In this way the individual becomes the central point of data integration, and individuals always have the ability to extract their data and take it wherever they wish.”
But suppose you could take charge of not only distributing your “official record” (from the PDE) but also building and storing it? The API Economy makes that possible. I’ll show you how in a moment.
The third component of the grand idea is enhanced privacy. When you control the storage and distribution of your personal information, such that other parties don't need to store data, nor acquire more information than they need, that really enhances the privacy of your information. Let's take a look at a fairly common example.
You’re thirsty, and head into the pub for a beer. As things work today you’ll most likely need to show some form of government identity document (which includes your birth date) to the server. But that document also includes your name, address, and perhaps other personal information. Many women have found themselves subject to unwanted stalking after showing a driver’s license to a bartender on the make.
But let's say you could send a message to an identity provider (IdP) who could vouch for the fact that you are of age to order a drink – or simply vouch for the fact that you are "over 18" or "over 21" (no need for the IdP to know where you are). The IdP checks with the source of authority for your age (most likely a government agency) and is assured you are "over 21". The IdP sends you a message and an encrypted URL, which you transfer to the pub's authorization system, which verifies that the answer is "over 21" and that it comes from a legitimate authority. So the pub knows you're of legal age, but nothing else, and the IdP knows only that you asked a question and they answered it. That's enhanced privacy.
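A toy sketch of that exchange follows. It uses an HMAC-signed claim as a stand-in for whatever signing scheme and URL handoff a real IdP would use; the shared key, claim format and parties are all invented for illustration, and in practice the pub would verify the IdP's signature through the trust framework rather than share a secret with it.

```python
import hmac
import hashlib
import json

# Placeholder only: in reality the pub verifies an IdP signature via a trust
# framework; a shared secret is used here just to keep the sketch self-contained.
IDP_KEY = b"demo-key-established-under-a-trust-framework"

def idp_issue_claim(over_21: bool) -> dict:
    """The IdP checks with the authoritative source and signs a minimal answer."""
    claim = {"claim": "over_21", "value": over_21}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def pub_verify_claim(claim: dict) -> bool:
    """The pub learns only 'over 21: yes/no' - no name, address or birth date."""
    sig = claim.pop("sig", "")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and claim["value"] is True

print(pub_verify_claim(idp_issue_claim(over_21=True)))    # True: serve the drink
print(pub_verify_claim(idp_issue_claim(over_21=False)))   # False: no sale
```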
So how do we get there?
That brings us to the fourth concept of the grand idea – a trust framework. The IdP needs to trust that you are who you say you are; the government agency needs to trust that the IdP is legitimate (as does the pub). All of this will be possible through the use of a trust framework to which all of the parties are subscribed.
Put all these parts together – the API economy, the personal data ecosystem, enhanced privacy and a trust framework – and you've got what we at KuppingerCole call a Life Management Platform.
For a very detailed look at what we mean see Martin Kuppinger’s advisory note: “Life Management Platforms: Control and Privacy for Personal Data,” but here’s what we’re talking about in a nutshell:
“Life Management Platforms will change the way individuals deal with sensitive information like their health data, insurance data, and many other types of information – information that today frequently is paper-based or, when it comes to personal opinions, only in the mind of the individuals. They will enable new approaches for privacy and security-aware sharing of that information, without the risk of losing control of that information. “
Martin particularly notes that “Obviously, Life Management Platforms are far more than Personal Data Stores. They not only support a secure store for sensitive personal information. They allow making a better use of that information.”
In fact, a Life Management Platform that combines the API economy, a personal data ecosystem, enhanced privacy, and one or more trust frameworks is how we will manage our information and our lives in the years to come.
There'll be a lot more about this – both the Life Management Platform and all of its components – at the European Identity and Cloud Conference in May, and I fully expect ("predict" isn't the word I want to use here) that the mainstream media will be talking about Life Management Platforms by the end of the year – but I can't foretell what terms, or buzzwords, they might use!