RSA SecurID again

23.03.2011 by Martin Kuppinger

I blogged last week about the RSA SecurID case. In the meantime there have been several other posts and pieces of advice on the matter, and I’d like to put together some thoughts of my own, looking at what customers should do now.

What should existing customers do short-term?

In most cases, RSA SecurID will be a standard mechanism for strong authentication which can’t be replaced immediately. Customers who don’t use a solution for versatile authentication usually aren’t able to switch to another (stronger) authentication mechanism on the fly. Abandoning RSA SecurID, however, would make things even worse, because it would mean stepping back to one factor with one or two means of authentication. Thus it is about staying with RSA SecurID and deciding which additional actions to take – “compensatory controls” such as increased auditing, additional fraud detection technologies, and so on.

Customers who have a versatile authentication approach in place might evaluate whether they can replace RSA SecurID with another factor – which, for time and logistics reasons, would then be an approach not depending on hardware. However, doing so will be somewhat complex (helpdesk calls, technical aspects,…). Thus customers should first check whether the increased risk of using RSA SecurID is acceptable or not. Instead of replacing it, adding another factor/means of authentication for high-risk interactions and transactions appears to be the most appropriate option. Besides this, the auditing actions mentioned above have to be implemented.

What should existing customers do mid-term?

Replacing a technology like RSA SecurID is quite expensive. Given that RSA will harden its own systems and seeds can be changed over time, the threat will decrease. However, as mentioned in my last post, RSA SecurID will never be the same again. The mid-term answer, from my perspective, is versatility. Having more options for quickly changing to other and additional factors and means of authentication is the most promising approach. Thus, RSA SecurID becomes just one of multiple approaches.

For high-risk environments, biometrics might come into play again (if not used yet). In addition, there are some approaches to two-factor authentication which don’t rely on seeds and secret algorithms. However, they aren’t necessarily absolutely secure (if anything can be absolutely secure), so customers should carefully evaluate whether other approaches provide real advantages over the established RSA SecurID approach. The same level of mistrust should be applied to all types of authentication.

What should potential buyers do?

It is about re-evaluating the strategy for authentication. Versatility is key – and strategies need to be rethought if they are not focused on a versatile approach which allows different types of authentication mechanisms to be used and exchanged flexibly. Regarding RSA SecurID, the risk has to be rated again, and decisions about whether the approach is sufficient for the interactions and transactions which have to be protected have to be reviewed. From my perspective it is not so much about abandoning RSA SecurID (depending on what RSA does to increase security again, for sure – but I assume they will do a lot) but about carefully analyzing the level of protection provided and weighing it against the risks of authentication fraud for what has to be protected. When deciding to use RSA SecurID, appropriate controls have to be implemented – but that is true for any other authentication mechanism as well.

By the way: Regardless of the RSA SecurID approach, any authentication strategy which doesn’t focus on versatility, risk-based authentication/authorization and context-based authentication/authorization should be re-thought.

Some general thoughts:

RSA has had a very strong image for its RSA SecurID approach – and it worked for many years. However, there are two fundamental issues:

  • Centralized seeds
  • Confidential algorithm

Both are inherent risks of that mechanism. Thus security is obviously limited. Regardless of which approach you use, thinking about the potential weaknesses (social engineering and phishing; central stores which might become targets of attackers;…) is important. Unfortunately, security comes at a price, because there are no simple, cheap, easy-to-use approaches without logistics costs and other shortcomings which provide perfect security.
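RSA’s actual SecurID algorithm is confidential, but the general shape of seed-based one-time passwords can be illustrated with the public TOTP standard (RFC 6238). The sketch below shows why a central seed store is the crown jewel: anyone holding the shared seed can compute every future token code.

```python
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, for_time=None, interval: int = 30, digits: int = 6) -> str:
    """Time-based OTP per RFC 6238: both parties derive the code from a
    shared secret seed, so a stolen seed database lets an attacker
    reproduce any token's output."""
    now = time.time() if for_time is None else for_time
    counter = int(now) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: seed "12345678901234567890", T=59 → "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Note that TOTP differs from SecurID in that its algorithm is public – only the seed is secret, which is exactly the “centralized seeds” issue named above.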

Again, as mentioned in my last post, we will discuss topics like versatile authentication and the RSA SecurID incident at EIC 2011. You shouldn’t miss that event.


Context-aware, information-centric, identity-aware, versatile

03.02.2011 by Martin Kuppinger

Recently another analyst company had a presentation titled “The future of Information Security is context- and identity-aware”. Yes – but not that new. I remember that we had context-based approaches as a key trend at our second European Identity Conference, back in 2008 (thus the upcoming EIC 2011 is IMHO the best place to learn about the new trends and today’s best practices around IAM, Cloud Security, GRC, and related topics).

I personally think that there are some important aspects to consider when looking at the overall topic of Information Security:

  1. First of all: It is about the I in IT, not the T. It is Information Security, not Technology Security. That is information-centric.
  2. You need to have the organizational structure, the processes, the policies in place before you look at technology.
  3. You need standards around information security for your entire application environment to reduce grassroots security approaches and islands.
  4. Context is an important thing. Context defines criteria to understand the risk of interactions and transactions.
  5. Given that, it is mainly about risk. Context helps you in better dealing with risks, but the core thing is risk.
  6. Regarding identity-aware, I’m a little reluctant. It is correct in the sense that there is little value in just looking at information or systems but not the identity. Look at DLP: Blocking all transfer of information is wrong – it is about allowing only the right people to transfer the right information. In that sense, identity-aware is important. Have a look here (not that new…) where I have put DLP into context. But you should be careful – it is not necessarily about a 1:1 mapping of person to identity. There are situations (think about identity federation) where it might be a role or a group of people.
  7. Versatility is important as well – the flexibility to authenticate people in different ways, which is a prerequisite to support all types of potential users, internal as well as external.

Information security is a key topic for every organization (and not only the IT department). Following the principles above should help you to better understand the value of technical approaches. Technology which doesn’t support the principles and is not “backed” by the organizational structure, processes, and so on will only have limited value to achieve your targets around information security.


Oracle acquires Passlogix

06.10.2010 by Martin Kuppinger

Oracle has announced that it is acquiring Passlogix. That is no real surprise to me. Oracle has been the last large OEM partner of Passlogix for their E-SSO (Enterprise Single Sign-On) solution. Others like IBM had opted for their own solutions in the past. Passlogix had some success in direct sales, but being a niche vendor they probably had to decide between an exit strategy and significant investments to expand their own portfolio.

From an Oracle perspective, the acquisition definitely makes sense. Oracle mentions “tighter integration” as the opportunity behind the deal. And that is exactly what the deal is about. E-SSO is currently in a transition phase, from a very focused and specialized solution towards an integrated element within authentication and authorization concepts. Versatility, i.e. the capability to flexibly support different authentication methods in sort of a plug-and-play approach, combined with step-up authentication and other concepts, is just one example of new trends in the SSO market. Integrating E-SSO with Web Access Management as well as Identity Federation is another. And the potential of bringing together Oracle’s Adaptive Authentication Manager, i.e. risk-/context-based authentication, with E-SSO (e.g. E-SSO based on risk and context) is obvious as well.

With the acquisition, Oracle opens the door for new, integrated approaches beyond classical, pure-play SSO. That fits with what IBM did when acquiring E-SSO technology, or Novell when buying a source code license from ActivIdentity – all players want to better integrate E-SSO with other solutions, and all want the flexibility in their product strategy they can never have with an OEM product. What can be done with integrated approaches has been demonstrated by Evidian for quite a while – one consolidated access management.

Thus it will be interesting to observe where Oracle starts to deliver on the idea of integrating E-SSO with other technologies. Even though I rate the integration of E-SSO positively overall, there is one aspect which should be kept in mind: A strength of the pure-play E-SSO solutions is that they aren’t intrusive with respect to the existing IT infrastructure. Thus they are very easy to deploy and offer quick-win potential. This advantage shouldn’t be given away.


Why we need claims in Windows

21.04.2010 by Martin Kuppinger

Microsoft has introduced the concept of claims-based security with its “Geneva” project. Claims are a sort of attribute provided by identity providers in the form of tokens and consumed by applications. In fact they are one way to make federation easier and more user-centric. “Geneva” provides the tools at all levels to work with claims. The concept of claims is used by some other groups at Microsoft, and we will probably see several Microsoft applications with support for claims within the next months.

However, the biggest impact might be on the Windows operating system itself. Claims could make it much more flexible from a security management perspective than today’s mainly ACL-based security model. ACLs are too static and too complex to manage to really fulfill customer needs today – not only in Windows, but in other operating systems as well. If you think of an operating system which consists of services (Service Providers, Relying Parties) and relies on Identity Providers to provide claims, the entire security management can become much more efficient: based on policies, using dynamically provided claims. Authorization might be done by the services themselves based on policies and claims, or by specialized authorization engines within the operating system on behalf of the services (the latter not yet being part of “Geneva”).

It is, without any doubt, not easy to perform such a fundamental change. ACLs are at least somewhat understood; claims are new. There has to be a migration path and compatibility. But if we look at all the options we have, claims appear to be the most promising concept for future security at the operating system level. One interesting side effect is that the same policies might be applied to other elements in the security infrastructure as well – external access management tools and so on.
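The contrast between static ACLs and policy-driven claims evaluation can be sketched in a few lines. This is a hypothetical illustration, not Microsoft’s “Geneva” API – the claim types, issuer names, and policy format are all assumptions:

```python
# Hypothetical sketch: authorization decided at runtime from policies and
# claims issued by an identity provider, instead of static per-object ACLs.
from dataclasses import dataclass

@dataclass
class Claim:
    issuer: str  # the identity provider that asserted this claim
    type: str
    value: str

def authorize(claims: list, policy: dict) -> bool:
    """Grant access only if every requirement in the policy is satisfied
    by a claim from a trusted issuer. Changing the policy reshapes access
    without editing ACLs on individual resources."""
    for claim_type, allowed_values in policy["require"].items():
        if not any(c.type == claim_type
                   and c.value in allowed_values
                   and c.issuer in policy["trusted_issuers"]
                   for c in claims):
            return False
    return True

token = [Claim("corp-sts", "role", "finance"), Claim("corp-sts", "dept", "emea")]
policy = {"trusted_issuers": {"corp-sts"},
          "require": {"role": {"finance", "audit"}}}
print(authorize(token, policy))  # True
```

The point of the sketch is the side effect mentioned above: because the decision is policy-driven, the same policy object could be handed to an external access management tool unchanged.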

Meet me at European Identity Conference 2010 and Cloud 2010 Conference, Munich, May 4th to 7th.


Versatile authentication – break-through for mass adoption of strong authentication?

11.03.2010 by Martin Kuppinger

Versatile authentication is one of the hot topics in IT – more and more vendors are starting to support it in some way or another. Versatile, not that common a term, means the ability to flexibly switch between different authentication methods. In practice, versatile authentication solutions should support at least the following features:

  • Flexible use of different authentication methods.
  • Simple plug-in of additional authentication methods, i.e. extensibility.
  • Flexible interfaces for applications OR integration with existing technologies which interface with other apps.
  • Support for step-up authentication and other more advanced approaches.

Other aspects like fallback methods, management support for handling the token logistics and so on are value-adds, depending on the implementation of the versatile authentication technology.
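The feature list above can be sketched as a minimal pluggable authentication layer. The method names and strength scores are illustrative assumptions, not any vendor’s API:

```python
# Illustrative sketch of a versatile authentication platform: methods are
# registered as plug-ins, and step-up selects a stronger method when the
# risk of a transaction demands it.
from typing import Callable

class VersatileAuthenticator:
    def __init__(self):
        # method name -> (strength score, verification callback)
        self._methods = {}

    def register(self, name: str, strength: int, verify: Callable):
        """Plug in an additional authentication method (extensibility)."""
        self._methods[name] = (strength, verify)

    def methods_for(self, required_strength: int):
        """Step-up support: list every registered method strong enough
        for the required level."""
        return [name for name, (strength, _) in self._methods.items()
                if strength >= required_strength]

auth = VersatileAuthenticator()
auth.register("password", 1, lambda user: True)   # placeholder verifiers
auth.register("otp_token", 2, lambda user: True)
auth.register("smartcard", 3, lambda user: True)
print(auth.methods_for(2))  # ['otp_token', 'smartcard']
```

A fallback method, as mentioned below, would simply be the next-strongest entry returned for the required level when the preferred method fails.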



How much security do we need?

04.02.2010 by Martin Kuppinger

My colleague Jörg Resch blogged today about the ignorance regarding layered security approaches. Yes, there is no absolute security. Security is something which is tightly related to risk. Given that we can’t have perfect security, especially not with people using systems, it’s always about the balance between the security-imposed risk and the cost of risk mitigation.

That’s a very simple balance: The higher the risks, the more you can and should spend on risk mitigation – as long as risk mitigation is feasible (which is not always the case – a life insurance policy doesn’t help you mitigate the risk of dying…). I deliberately used the term “security-imposed risk”. It is NOT about security risks, but about the consequences of security-related incidents. Stolen data and its abuse, illegal transactions, customer loss due to a decrease in credibility,… – that’s what it is about.

But that doesn’t change the fundamental point: When thinking about security we have to think about risks. I’ve blogged about Risk Management before. What we have to understand is that there is no such thing as THE information or system which has to be protected. We have different types of systems, information, and transactions which are at different risk. And we have to apply security (technology and organization) according to the risk associated with these different systems, information, and transactions.

There is no such thing as THE level of security you need. You need appropriate security for different types of transactions and interactions (and the related systems). Using risk as the main criterion in decisions about security investments helps to optimize what is done in IT security. And focusing on a few consistent approaches at different levels (for example, a few different types of authentication with step-up features, based on a versatile authentication platform; or a consistent authorization strategy with a few consistent levels of management and protection) will be much cheaper than spending too much money on point solutions like many (not all) of the DLP tools out there.

Understanding that different types of interactions and transactions have to be protected differently is the key to successful IT security concepts. Risk is the core criterion for doing that. Interestingly, this is not really new. What governmental and military organizations are doing in “information classification” (having started long before the invention of the computer) is nothing else than using risk as a criterion and defining different levels of protection for different interactions and transactions. Such concepts don’t have to be extremely complex. But a differentiated view has to be the guideline for everything which is done in IT security.
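The classification idea can be made concrete with a small mapping from transaction classes to required controls. The class names and control levels here are illustrative assumptions, in the spirit of information classification rather than any specific scheme:

```python
# Sketch: each class of interaction/transaction carries a risk rating and
# a mandated control, instead of one uniform "level of security".
RISK_CLASSES = {
    "public_lookup":     {"risk": "low",    "control": "password"},
    "internal_document": {"risk": "medium", "control": "otp"},
    "payment_approval":  {"risk": "high",   "control": "otp+step_up"},
}

def required_control(transaction_type: str) -> str:
    """Return the authentication control mandated for a transaction class;
    unknown classes fall back to the strictest control (deny-by-default)."""
    entry = RISK_CLASSES.get(transaction_type)
    return entry["control"] if entry else "otp+step_up"

print(required_control("payment_approval"))  # otp+step_up
print(required_control("public_lookup"))     # password
```

The few consistent levels keep the scheme cheap to run, which is exactly the argument against scattering money across point solutions.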

To learn more about this and to discuss this with your peers, have a look at our upcoming virtual conferences and our European Identity Conference 2010.


XACML – why it is so important

22.10.2009 by Martin Kuppinger

XACML (eXtensible Access Control Markup Language) is gaining increasing attention as one of the core standards in the field of information security and thus IT security. While standards like SAML (Security Assertion Markup Language) address the problem of authentication, XACML is about authorization – the more complex challenge. XACML allows the definition and exchange of authorization policies in a heterogeneous environment. Whether it is about cloud security and controlling the authorization policies of cloud services or about SOA security for internal applications: XACML supports authorization management in such use cases.

However, there is no such thing as a free lunch: XACML requires not only tools like XML/SOA Security Gateways which support the standard, or cloud services with XACML support. There are two other important aspects:

  • XACML in fact means a shift from a more static security approach like with ACLs (Access Control Lists) towards a dynamic approach, based on policies which are applied at runtime. These dynamic security concepts are more difficult to understand, to recertify, to audit and analyze in their real-world implications. Thus, the use of XACML requires not only the right tools but well-thought concepts for policy creation and management.
  • XACML is just a foundation to express policies. Within a use case, policy concepts have to be defined. Over time, there should be higher level standards or defined use cases building on XACML and focusing on a standardization of the content of these policies.

Anyway, XACML is very useful. One of the most interesting areas for XACML is SOA security. Currently, many SOA-based applications still lack a valid concept for authorization. Authorization is still frequently built into these applications. XACML can provide the policies to externalize authorization management and thus add flexibility to SOA-based applications.
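The externalization XACML enables follows the policy enforcement point / policy decision point (PEP/PDP) split: the application asks a decision service instead of hard-coding authorization. XACML itself expresses policies as XML; the Python sketch below only mirrors the shape of its subject/resource/action targets and is an illustration, not the standard’s wire format:

```python
# Sketch of externalized authorization: the application (PEP) forwards the
# request attributes to a decision function (PDP), which evaluates policies
# at runtime. Deny-by-default, as in a typical XACML deny-overrides setup.
def pdp_decide(request: dict, policies: list) -> str:
    """Return 'Permit' if some policy's target matches all request
    attributes, else 'Deny'."""
    for rule in policies:
        if all(request.get(attr) in allowed
               for attr, allowed in rule["target"].items()):
            return rule["effect"]
    return "Deny"

policies = [
    {"target": {"role": {"clerk", "manager"},
                "action": {"read"},
                "resource": {"invoice"}},
     "effect": "Permit"},
]
print(pdp_decide({"role": "clerk", "action": "read", "resource": "invoice"},
                 policies))  # Permit
```

Because the policies live outside the application, recertification and audit happen against the policy store rather than against code – which is also why, as noted above, well-thought-out policy management concepts become essential.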

Overall, it is – from my perspective – definitely worth spending some time exploring the potential of XACML to improve the security of systems and applications. There are many areas where XACML can be used successfully today. However, as with any emerging technology, there will be a lot of improvements in the managing and consuming applications (and, hopefully, around the standards or use cases building on XACML) over the next few years. Thus the step to XACML has to be considered carefully. The good thing is: It is about standards, so the risk of lock-in isn’t that big.

We will discuss this in more depth in an upcoming webinar. Register for free!


The secret leader in context-based authentication and authorization?

19.06.2008 by Martin Kuppinger

Context-based authentication and authorization is one of the topics which have the potential to become the next hype. I’ve posted twice on this subject, here and here, and we had, led by Dave Kearns, a lot of discussions around it at our EIC 2008. I’m convinced that the topic will become even more important at next year’s EIC.

Besides the obvious players in that future market segment, like the risk-based authentication vendors (Arcot, Entrust, Oracle, RSA and some others), there are other categories of vendors which offer at least some context-based authentication and authorization even today. One of them is Citrix. Given the number of installations of the Citrix Access Gateway, they might even be sort of the leader in that market.

You might argue: An SSL gateway is not a solution for context-based authentication and authorization. Yes – and no. No, because an SSL gateway without additional components is just an SSL gateway. Yes, if you combine a Citrix Access Gateway with other things. At a Citrix analyst briefing yesterday, a Swiss bank talked about their approach for controlling access of remote workers. They use the Citrix Access Gateway together with many other Citrix technologies and with a NAP (Network Access Protection) tool from EPA factory.



Why SSO is so popular these days…

26.10.2007 by Martin Kuppinger

Our upcoming Identity Management market report 2007/2008 shows some interesting results. Not too surprising, at least most of them, but nevertheless pretty interesting. One important piece of information is where the money will be spent next year. For sure there is Identity Provisioning. And, as expected, Role Management is a very important area. Besides these two areas, Single Sign-On is the third topic on which a lot of money will be spent within the next 12 months. More than 30% of the survey participants will implement SSO, will enhance their implementations significantly or will replace the technology they use today. Another roughly 30% will optimize their existing implementations. Less than 30% of the companies won’t spend money on SSO.

The question behind this is why. There are several aspects. SSO helps the users. It eases their lives with fewer user names and passwords. SSO makes the user the admin’s friend. Another aspect is compliance. SSO might help in achieving some of the targets of compliance, at least in (the strongly recommended) combination with strong authentication.

It is easier to audit who is allowed to access which applications, who actively uses accounts in which system, and who has accessed which system when. Upcoming trends like the integration with events from physical access systems, thus taking a step towards context-based authentication and authorization, enhance the support for compliance requirements.

From my perspective, these two aspects – user friendliness and compliance support – are the most important driving factors for the success of SSO. Besides, SSO is pretty mature, at least the Enterprise SSO solutions which are most common today. But token-based approaches like the use of smartcards with certificates and other credentials stored on the tokens also show increasing maturity, lower costs and a broader availability of devices.

Thus, if you haven’t solved your SSO issues until now, start thinking about them. But when you do, don’t stop at an internal solution like Enterprise SSO – think about the future. SSO for your customers through support of OpenID, CardSpace and other technologies should be part of your SSO strategy (look at some of our downloads…), as should the role identity federation will play in the coming years.


From risk-based to context-based authorization

20.10.2007 by Martin Kuppinger

Dave Kearns, who will contribute as a track moderator and speaker to our European Identity Conference 2008, has introduced the term context-based authorization (and influenced my thoughts on this topic – thanks, Dave) as an approach for basing authorization on the context in which a user acts. It goes beyond risk-based authorization in two ways: it’s not binary, i.e. either in or out, and it is potentially based on more information about the context. I’d like to add some thoughts of my own and explain the difference between today’s risk-based authorization and tomorrow’s context-based authorization.

Risk-based authorization is an approach which has developed mainly in the financial industry. The idea is to observe and analyze user interactions to detect potential attacks and other dangerous situations. If there is a risk, the authorization to access a specific system or specific data within a system is denied. There are several vendors in this space, including Oracle with their Bharosa acquisition and Arcot Systems.

The idea of context-based authorization goes well beyond this, even though there is no hard borderline between vendors of risk-based authorization and the context-based authorization idea. It’s more of an evolutionary process. I personally expect that today’s vendors in the risk-based authorization space (which sometimes have some capability for context-based authorization as well) will expand their products towards context-based authorization. I assume that we will also see some new specialists in the space of context-based authorization. And for sure the key players in the IAM space will enter the market for context-based authorization with either the make or the buy approach, i.e. building it themselves or acquiring someone.
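The “not binary” distinction can be sketched concretely: a risk engine answers allow/deny, while a context-based decision grades the response from several context signals. The signal names, weights, and thresholds below are assumptions for illustration only:

```python
# Illustrative sketch: a context-based decision returns a graded outcome
# (full access, step-up, limited access, deny) instead of a binary verdict.
def context_decision(ctx: dict) -> str:
    score = 0
    if not ctx.get("known_device"):
        score += 2  # unfamiliar device raises the risk score
    if ctx.get("location") not in ctx.get("usual_locations", set()):
        score += 2  # unusual location
    if ctx.get("time_of_day") not in range(7, 20):
        score += 1  # activity outside typical working hours
    if score == 0:
        return "full_access"
    if score <= 2:
        return "step_up_authentication"
    if score <= 4:
        return "limited_access"
    return "deny"

print(context_decision({"known_device": True,
                        "location": "office",
                        "usual_locations": {"office"},
                        "time_of_day": 10}))  # full_access
```

A pure risk-based engine would collapse the same score into in-or-out; the graded outcomes are what lets context shape the authorization itself rather than just gate it.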


© 2014 Martin Kuppinger, KuppingerCole