Another dead body in IT? Or is XACML still alive?

13.05.2013 by Martin Kuppinger

Since my colleague Craig Burton declared that SAML is dead, it seems to be in vogue among analysts to take on the role of public medical examiner and diagnose the death of standards, or even of IAM (Identity and Access Management) in general. Admittedly, the latter case was not about diagnosing a death but about proposing to kill IAM, but that does not change much. The newest in this series of dead bodies is XACML, according to another industry analyst. So we are surrounded by corpses now, or maybe by living zombies. But is that really true? My colleague Craig Burton titled his blog post – for a very good reason – “SAML is Dead! Long Live SAML!” That is fundamentally different from saying “XACML is dead”.

There are a lot of good answers from experts such as Ian Glazer, Gerry Gebel (OK, he might be a little biased, being the President of Axiomatics Americas), or Danny Thorpe.

I am clearly not suspected of being an enthusiastic XACML evangelist wearing blinders. Just ask some of the Axiomatics guys – we have had many controversial discussions over the years. However, for me it is clear that neither Dynamic Authorization Management in general nor XACML in particular is dead.

What puzzled me most in this blog post was this part of the opening sentence:

XACML … is largely dead or will be transformed into access control

OK, “access control”. XACML is access control. Access control is everything around authentication and authorization. So what does this mean? I just do not understand that sentence, sorry. XACML is part of the overall access control story.

From my perspective, the two most important concepts within access control are Dynamic Authorization Management and Risk-/Context-Based Access Control (i.e. both authentication and authorization). The latter will only work with Dynamic Authorization Management in place: if we want to make authorization decisions based on context and risk, we need systems that externalize authorization and rely on rules that can take that context into account.
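To illustrate what such context-aware rules might look like, here is a minimal sketch; all attribute names, thresholds, and rules are purely illustrative and not taken from any specific product:

```python
# A minimal sketch of a context-aware authorization decision.
from datetime import datetime

def decide(subject: dict, action: str, resource: str, context: dict) -> str:
    """Return "Permit" or "Deny" based on identity attributes plus runtime context."""
    # Rule 1: high-risk sessions may only read, never modify.
    if context.get("risk_score", 0) > 70 and action != "read":
        return "Deny"
    # Rule 2: wire transfers require a strongly authenticated session.
    if resource == "wire-transfer" and context.get("auth_level") != "strong":
        return "Deny"
    # Rule 3: outside business hours, only employees get access at all.
    hour = context.get("hour", datetime.now().hour)
    if not 8 <= hour < 20 and subject.get("type") != "employee":
        return "Deny"
    return "Permit"

# The same subject and resource can yield different results per context:
print(decide({"type": "employee"}, "read", "account", {"risk_score": 90}))    # Permit
print(decide({"type": "employee"}, "update", "account", {"risk_score": 90}))  # Deny
```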

The challenge with Dynamic Authorization Management, i.e. the technologies implemented in a variety of products such as the Axiomatics Policy Server, the Oracle Entitlements Server, the IBM Security Policy Manager, Quest APS, and many others, is that it requires changes in both application code and the mindset of software developers and architects. That is a long journey. On the other hand, we see some increase in acceptance and use of such technologies. Notably, Dynamic Authorization Management is not new. You will find such concepts dating back to the mid ‘70s in mainframe environments, and IBM’s good old RACF can be considered an early example.

You can still argue that Dynamic Authorization Management is alive but that XACML, as the most important standard around it, is dead. There are good arguments against that, and I will not repeat what the others mentioned above have said. You might discuss where to use XACML and where to rely on proprietary technology. However, do you really want to lock your entire application landscape into the proprietary Dynamic Authorization Management technology of a single vendor? That would be a nightmare. You need to isolate your applications from the Dynamic Authorization Management system in use, and a standard helps in doing that. Just think about being locked into proprietary interfaces for all of your applications using a specific Dynamic Authorization Management system for the next 30, 40 or more years.
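One common way to achieve that isolation is a thin authorization facade inside the application that hides the concrete product behind a standard-shaped interface. A rough sketch, with all class and method names being hypothetical:

```python
# Isolating applications from the concrete Dynamic Authorization Management
# product behind a thin, standard-shaped interface (names are illustrative).
from abc import ABC, abstractmethod

class AuthorizationService(ABC):
    """What the application codes against - never a vendor API directly."""
    @abstractmethod
    def is_permitted(self, subject: str, action: str, resource: str) -> bool: ...

class XacmlPdpAdapter(AuthorizationService):
    """Adapter translating the call into a XACML-style decision request."""
    def __init__(self, pdp_endpoint: str):
        self.pdp_endpoint = pdp_endpoint  # e.g. the vendor's PDP URL

    def is_permitted(self, subject, action, resource) -> bool:
        request = {  # attribute categories roughly as XACML defines them
            "subject": {"subject-id": subject},
            "action": {"action-id": action},
            "resource": {"resource-id": resource},
        }
        # A real adapter would send this to self.pdp_endpoint and parse
        # the Permit/Deny/Indeterminate result; stubbed out here.
        return self._send(request) == "Permit"

    def _send(self, request) -> str:
        raise NotImplementedError("wire to the concrete PDP here")

# Swapping vendors means writing a new adapter - not touching applications.
```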

XACML is even the better choice for COTS applications. They can rely on a standard, instead of every vendor building proprietary connectors. Most vendors will do that for Microsoft SharePoint, because SharePoint is so important. But that is the exception, not the rule. And deducing from the fact that vendors support SharePoint with proprietary interfaces (instead of using XACML) that XACML is dead is just a wrong deduction. The problem in that case is not XACML but the SharePoint security model, which is clearly not the best I have ever seen (to say the least). XACML is of value. Standards are of value. And I believe you would need much better reasons to diagnose the death of standards.

To learn more about the real trends in IAM, IAG, Cloud Security, and many other topics, just visit the EIC 2013 that starts on Tuesday, May 14th.


SAML, SCIM – and what about authorization?

16.11.2011 by Martin Kuppinger

Cloud Computing is just another delivery model for IT services. However, due to the specifics of cloud services, like multi-tenancy and many others, the requirements are sometimes even higher than for on-premise services. One of these requirements in well-architected IT environments, and for well-architected applications, is the ability to externalize security. That includes relying on external directories for administering and authenticating users, e.g. on Identity Providers. It might include the capability of “cloud provisioning”, e.g. receiving changes of users – even though I clearly favor federation as a loosely coupled approach over provisioning. And it should include support for external logs, event monitoring, and so on – unfortunately, that appears to be a topic no one is really working on.

And it should include the capability of managing authorizations in cloud services based on centrally managed policies (on-premise or using a cloud service – but centrally, and not per cloud service!). There is limited value in federating users and then doing all the administration work per cloud service, using the cloud service’s proprietary management GUIs or APIs. However, authorization is where the problem really starts.

There is a standard for distributed, dynamic authorization management out there: XACML, the eXtensible Access Control Markup Language. It allows you to describe the rules. It allows you to work with different repositories for identity information (PIPs, Policy Information Points) and other information required for authorization, it provides interfaces to custom and standard applications, and so on. However, I haven’t seen XACML in the cloud so far. Unfortunately, I also haven’t seen any real alternative to XACML.
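To make the roles concrete: in XACML terms, a Policy Enforcement Point (PEP) builds a decision request, the Policy Decision Point (PDP) evaluates it against policies, and PIPs contribute attributes the request itself doesn’t carry. A simplified sketch of that division of labor – the directory content and the rule are made up for illustration:

```python
# A rough sketch of the XACML division of labor: the PEP asks, the PDP
# decides, and a PIP supplies attributes the request doesn't carry.

def pip_lookup(subject_id: str) -> dict:
    """Policy Information Point: fetch extra subject attributes,
    e.g. from a corporate directory (hard-coded here)."""
    directory = {"alice": {"department": "finance", "clearance": 2}}
    return directory.get(subject_id, {})

def pdp_evaluate(request: dict) -> str:
    """Policy Decision Point: evaluate a rule against request + PIP data."""
    attrs = {**request["subject"], **pip_lookup(request["subject"]["subject-id"])}
    # One XACML-style rule: finance staff may approve invoices.
    if (request["resource"]["resource-id"] == "invoice"
            and request["action"]["action-id"] == "approve"):
        return "Permit" if attrs.get("department") == "finance" else "Deny"
    return "NotApplicable"

# The PEP (sitting in or in front of the application) builds the request:
request = {"subject": {"subject-id": "alice"},
           "resource": {"resource-id": "invoice"},
           "action": {"action-id": "approve"}}
print(pdp_evaluate(request))  # Permit
```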

Some might claim that SAML could do that job. There is the SAML Authorization Decision Query as part of the SAML 2.0 standard. But that leads pretty quickly to SAML/XACML interoperability and things like the SAML 2.0 profile of XACML. In fact, if it is about having a consistent set of policies expressed in a common standard, XACML is what we need. We need to define and manage these policies consistently per organization, not per service. Services should request authorization decisions – at least in an ideal world. However, when looking at the cloud, another aspect comes into play: performance. Performance is a general issue when externalizing authorization decisions. For cloud services which have to ask many different authorization “engines”, it is an even bigger issue. And there is the issue of latency, which is a factor in cloud environments due to the geographical distances you might find there.

Thus, while XACML is fine for defining policies, the interesting question is: Should cloud services ask external authorization engines for each authorization decision? Or is it better to replicate the relevant XACML policies to the cloud service and make the authorization decisions there? In the latter case, however, we still need a way to efficiently access the PIPs for the other attributes required to perform the authorization decision.
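The second option could look roughly like the following sketch: the cloud service runs a local decision engine and periodically pulls the centrally managed policy set, so only policies travel over the wire, not every single decision. The endpoint and the policy format are hypothetical:

```python
# Local decision engine with periodic policy sync from a central
# policy administration point (PAP). Endpoint and format are hypothetical.
import json
import time
import urllib.request

POLICY_URL = "https://pap.example.com/policies/current"  # hypothetical PAP

class LocalPdp:
    def __init__(self, refresh_seconds: int = 300):
        self.policies = []
        self.refresh_seconds = refresh_seconds
        self.last_sync = 0.0

    def sync(self):
        """Pull the centrally managed policy set when it is stale."""
        if time.time() - self.last_sync > self.refresh_seconds:
            with urllib.request.urlopen(POLICY_URL) as resp:
                self.policies = json.load(resp)
            self.last_sync = time.time()

    def decide(self, subject: dict, action: str, resource: str) -> str:
        self.sync()  # decisions stay local; only policies travel
        for rule in self.policies:
            if (rule["action"] == action and rule["resource"] == resource
                    and subject.get("role") in rule["roles"]):
                return rule["effect"]  # "Permit" or "Deny"
        return "Deny"  # deny-by-default
```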

I don’t have the full answer. However, I’m convinced that XACML is a key element for authorization in the cloud, given that it is the standard for externalizing authorization decisions. But it might need some enhancements to work optimally for cloud security as well. It will definitely need improved security architectures for the cloud services themselves, so that they can externalize authorization decisions and rely on centrally managed policies. And it definitely needs some thinking about the overall security architecture for cloud services. So I’m looking forward to comments on this post – maybe I’ve missed something and everything is already there; maybe this initiates some enhancements to standards. I don’t know, but I’m really curious.


Persons, Identities, Users, Accounts

10.08.2011 by Martin Kuppinger

Is there a mismatch between the reality in organizations and the implementations of at least several of the Identity Provisioning and Access Governance solutions when it comes to the representation of physical persons in IT? To me, it appears that there is.

The reality in all large organizations I know is sort of 3-tiered:

  • There is a physical person – let’s call him Mr. X
  • Mr. X can act in very different contexts. You might call them roles or digital identities; however, all of these terms are overloaded with meanings. I’ll give three examples. 1. Mr. X might be an employee of an insurance company and, at the same time, a freelance insurance broker for that same company. 2. Mr. X might be an employee of a bank and a customer of that bank. 3. Mr. X might be the managing director of both ABC, Inc. and DEF, Inc., both of which are owned by XYZ, Ltd., where he is employed as well.
  • In each of these contexts, Mr. X might have more than one account. If he acts as an external freelance insurance broker or a customer, that might be only one account. If he is the managing director of several corporations within a group, he might have different Active Directory accounts, different RACF accounts, different SAP accounts, and so on.

You might argue that these are exceptions. However, being a customer of the employing company isn’t an exception in many organizations. And, by the way: a good and valid model has to support not only the standard approach but the exceptions as well. In other words: there are few situations in which a real-world model isn’t 3-tiered.

And there are good reasons to model the systems accordingly. If someone is both a customer and an employee of a bank, there are very obvious SoD rules which apply: he shouldn’t grant loans to himself. If someone is a freelance insurance broker and an employee of the insurer, the same is true: he shouldn’t manage the insurance contracts he is selling. If someone is a customer and an employee, it’s the same again: he shouldn’t give himself discounts, grant himself consumer loans, or just change the delivery queue.

However, several tools follow a 2-tiered approach. They know, for example, an “identity” and the “accounts” or “users” associated with that identity. If someone has more than one such identity, the problems begin. In some cases, it is very easy to adapt the object model. In others, you have to find workarounds, like mapping the unique IDs of these identities into the other identities, which might require a lot of additional code and is error-prone.

From my perspective, supporting a 3-tiered model out-of-the-box, with

  • Persons
  • Context, Identities,… (whatever term you prefer)
  • Users (in specific systems), accounts,… (again – choose your term)

is mandatory to reflect the reality in organizations and to support the business requirements – especially when it comes to SoD policies. If you don’t need three tiers, it is easy to just use two of them. But if your tool supports only two tiers out-of-the-box, it might become a tough task to implement your real-world model. Looking at that point is, from my perspective, one of the most critical aspects when it comes to technology decisions.
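For illustration, here is a minimal sketch of such a 3-tiered model in code, including an SoD check that only works because all identities hang off the same person object; the conflict definition and all names are just examples:

```python
# One physical person, several contexts/identities, each with its own
# accounts - plus an SoD check spanning the identities of one person.
from dataclasses import dataclass, field

@dataclass
class Account:
    system: str          # e.g. "Active Directory", "RACF", "SAP"
    account_id: str

@dataclass
class Identity:
    context: str         # e.g. "employee", "customer", "broker"
    accounts: list[Account] = field(default_factory=list)

@dataclass
class Person:
    name: str
    identities: list[Identity] = field(default_factory=list)

# SoD rule: the same person must not be both employee and freelance broker.
CONFLICTS = {frozenset({"employee", "broker"})}

def sod_violations(person: Person) -> list[frozenset]:
    contexts = {i.context for i in person.identities}
    return [c for c in CONFLICTS if c <= contexts]

mr_x = Person("Mr. X", [
    Identity("employee", [Account("Active Directory", "mx01")]),
    Identity("broker", [Account("SAP", "BRK-4711")]),
])
print(sod_violations(mr_x))  # the conflict is only visible at the person tier
```

A 2-tiered tool that treats Mr. X’s employee identity and broker identity as two unrelated top-level objects has nothing to evaluate this rule against – which is exactly the problem described above.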


Database Security – a strategic perspective

31.03.2011 by Martin Kuppinger

In recent months I’ve done a lot of research around database security, talking with vendors like Oracle, IBM (Guardium), Sentrigo (now McAfee), Imperva, Bitkoo, and some others, as well as with several end-user organizations who either are using database security products or evaluating those technologies.

When looking at the market it is very important to understand that it is not a homogeneous market. The different solutions range from firewalls to specific tools for label security or data masking. Some are tightly integrated with databases, others are non-intrusive. I will provide a broad overview in an upcoming research note which covers the entire database security market and the vendors therein.

But before selecting the right vendor and the right tool for your database environment, you should ask and answer another question: How does this fit into your overall IT security strategy and implementation? I’m not a friend of point solutions in security. Solving one problem without looking at all the other problems doesn’t necessarily increase the overall level of security achieved. It might give a better feeling, but frequently there is still too much attack surface left.

Just think about securing your databases with a firewall. Some of the attack surfaces left are:

  • Security issues in the applications which access data in the databases
  • Administrative actions
  • All actions performed locally at the database server
  • Copying or deleting the database with administrative access at the operating system level

And that’s just a short and incomplete list. From a strategic perspective, you have to look at how to secure the entire stack. Recently I was at a customer who discussed where best to start securing his apps. As a quick start, I proposed that he build a simple spreadsheet with his (defined) 30 most critical apps and the stack these apps are using – including operating systems, application platforms, hypervisors, and for sure the databases. That simple spreadsheet will give him an impression of the dependencies he has to keep in mind – it visualizes that security isn’t about point solutions.

I’m not saying you should not invest in database security – but it should be one element of security. Thus, database security has to be put into context.

One interesting aspect here is database firewalls. There are some firewalls out there inspecting packets for SQL traffic based on policies. However, if we inspect packets – why not for everything? CIFS/SMB traffic to file servers? Web service security? That would allow applying a consistent set of policies wherever appropriate. It would provide a consistent layer of security. For sure that won’t solve all problems, but the advantage in contrast to having a “DB firewall”, a “SharePoint firewall”, a “CIFS/SMB firewall”, and so on is obvious. Another example is privileged user (account, identity, access) management, e.g. PxM. That is important for database management systems, but it is important for other types of systems (apps, operating systems, hypervisors, network appliances, …) as well. I’d opt for a solution which covers all of them.
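As a thought experiment, such a consolidated approach might look like the following sketch: protocol-specific inspectors normalize the traffic, and one policy engine decides for all of them. Everything here is illustrative and not modeled on any shipping product:

```python
# One policy engine fed by protocol-specific inspectors (SQL, CIFS/SMB),
# instead of one isolated firewall per protocol. All names illustrative.

def inspect_sql(stmt: str) -> dict:
    """Normalize a SQL statement into a generic traffic event."""
    writes = ("INSERT", "UPDATE", "DELETE", "DROP")
    return {"protocol": "sql", "write": stmt.lstrip().upper().startswith(writes)}

def inspect_cifs(op: str) -> dict:
    """Normalize a CIFS/SMB file operation into the same event shape."""
    return {"protocol": "cifs", "write": op in ("WRITE", "DELETE", "RENAME")}

def permit(user: dict, event: dict) -> bool:
    """One consistent rule set, wherever the traffic comes from."""
    if event["write"] and not user.get("privileged"):
        return False  # only privileged users may modify anything
    return True

print(permit({"privileged": False}, inspect_sql("DELETE FROM customers")))  # False
print(permit({"privileged": False}, inspect_cifs("READ")))                  # True
```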

For sure there are also many database-specific aspects of security, like data masking and others. And given that there isn’t a “multi-purpose firewall” or any other solution which covers everything, it is about using several solutions. There is also some good reason for specialized tools – easier to implement, easier to manage, more specific features. However, they should be used as part of an overall strategy, not as isolated point solutions. Customers have to look at it from that perspective – and vendors should move forward to provide more integrated solutions over time.

Good security is achieved by strategy, not by tactics.

EIC 2011: Munich, May 10th to 13th – the place to be for IAM, GRC, Cloud Security, Database Security, and more…


From technology to business – the shift in Identity and Access Management

10.02.2011 by Martin Kuppinger

Having been involved in a lot of advisory projects at end-user organizations for some years now, I’d like to share some of the fundamental changes I observe. There is always a gap between what analysts like us at KuppingerCole predict and what is done in reality. Thus, it is always great to observe that things we’ve predicted and proposed are becoming reality. So what has changed over the course of the last years – trends becoming reality:

  • Access and Identity Management: Back in 2008, I blogged about the relation between the terms “access” and “identity”, the latter being much more difficult to explain. Today, the clear focus is on access controls.
  • More flexible architectures: Some time ago, the idea was to have one provisioning system which covers everything. Today, more flexible architectures, as described in one of my research notes, are becoming reality. Access Governance on top of several provisioning systems, allowing organizations to protect existing investments and to move forward in smaller steps, is increasingly common – and the increased maturity of Access Governance tools is the foundation for doing this. Provisioning is increasingly seen as a technology layer below such integration layers (not necessarily Access Governance). And so on…
  • Access Governance on top, doing things in a more business-centric way: A consequence of this is that companies focus much more on business users and their requests for access (yes, for access, not mainly for identities). This isn’t entirely new, but the way IT interacts with business has changed over time.
  • Integration with service request approaches (not the service desk, like BMC believes): Another tendency is to integrate access and identity requests with other service requests, either in the IAM/Access Governance tools (like in Quest One ActiveEntry or through Avatier AIMS, to name just two) or in service catalogs. However, the interface has to be for business users, not IT – i.e. not the service desk itself. Service desks are increasingly part of the integration as well, within the more distributed architectures mentioned above, but for the manual part of fulfillment in systems which aren’t connected through a provisioning system.
  • Bodies of rules, policies, …: The most important change, from my perspective, is that more and more projects start with the definition of “bodies of rules”, policies, and concepts – and not with the selection of a technology. That definitely makes sense: you don’t start building a house by buying stones, you start with blueprints.

Two more trends (amongst others) that are increasingly becoming reality are:

  • Externalization of security out of applications in a standardized way, based on XACML and other approaches (and yes, there are real-world projects out there on this)
  • Hybrid cloud IAM and Access Governance – how to deal with mixed environments

Overall, there is a clear shift in how IAM is done. And this change will continue, with the upcoming integration of Access Governance and other IT GRC approaches into enterprise-wide GRC concepts.

To learn more about the trends as well as the best practices don’t miss EIC 2011, where thought leadership and best practices come together.


Why we need claims in Windows

21.04.2010 by Martin Kuppinger

Microsoft has introduced the concept of claims-based security with its “Geneva” project. Claims are a sort of attribute, provided by identity providers in the form of tokens and consumed by applications. In fact, they are one way to make federation easier and more user-centric. “Geneva” provides the tools at all levels to work with claims. The concept of claims is used by some other groups at Microsoft, and we will probably see several Microsoft applications with support for claims within the next months.

However, the biggest impact might be on the Windows operating system itself. Claims could make it much more flexible from a security management perspective than today’s mainly ACL-based security model. ACLs are too static and too complex to manage to really fulfill customer needs today – not only in Windows, but in other operating systems as well. If you think of an operating system which consists of services (service providers, relying parties) and relies on identity providers to deliver claims, the entire security management can become much more efficient: based on policies, using dynamically provided claims. Authorization might be done by the services themselves, based on policies and claims, or by specialized authorization engines within the operating system on behalf of the services (the latter not yet being part of “Geneva”).
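A simplified sketch of the idea – not how “Geneva” or Windows actually implement it: the identity provider issues claims, and the service checks those claims against a central policy instead of walking per-object ACLs. Token signing and verification are omitted for brevity:

```python
# Claims-aware access control: the relying party trusts claims issued by
# an identity provider instead of static per-file ACLs. All illustrative.

def issue_token(user: str) -> dict:
    """Identity provider: packages attributes about the user as claims.
    A real token would be signed; verification is omitted here."""
    claims = {"alice": {"department": "finance", "employee": True}}
    return {"subject": user, "claims": claims.get(user, {})}

def can_open(token: dict, resource_policy: dict) -> bool:
    """Relying party (e.g. a file service): checks claims against policy -
    no per-object ACL needed."""
    required = resource_policy["required_claims"]
    return all(token["claims"].get(k) == v for k, v in required.items())

policy = {"required_claims": {"department": "finance"}}  # one central policy
print(can_open(issue_token("alice"), policy))  # True
print(can_open(issue_token("bob"), policy))    # False - no matching claims
```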

It is, without any doubt, not easy to perform such a fundamental change. ACLs are at least somewhat understood; claims are new. There has to be a migration path and compatibility. But if we look at all the options we have, claims appear to be the most promising concept for future security at the operating system level. One interesting side effect is that the same policies might be applied to other elements in the security infrastructure as well – external access management tools and so on.

Meet me at European Identity Conference 2010 and Cloud 2010 Conference, Munich, May 4th to 7th.


Dynamic authorization management

18.03.2009 by Martin Kuppinger

Authorization management is becoming increasingly popular. But there are, in fact, two very different approaches:

  • Static authorization management, where changes are provisioned to the target systems.
  • Dynamic authorization management, where authorization decisions are externalized to authorization engines at runtime.

The latter approach requires changes to the applications, but it leads to the externalization of authentication and authorization (and hopefully auditing as well) from applications. Everything can then be managed easily from outside the applications.
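The contrast between the two approaches can be sketched in a few lines of code. The static variant pushes entitlements into the target system ahead of time; the dynamic variant decides at the moment of access and can use live context. All names are illustrative:

```python
# Static vs. dynamic authorization management, side by side.

# --- static: provisioning writes entries into the target's own store ---
target_acl: dict[str, set[str]] = {}

def provision(user: str, permissions: set[str]):
    target_acl[user] = permissions  # change takes effect on the next push

def check_static(user: str, permission: str) -> bool:
    return permission in target_acl.get(user, set())

# --- dynamic: nothing is stored in the target; decide at runtime ---
def check_dynamic(user: str, permission: str, context: dict) -> bool:
    # The externalized engine can use rules and live context,
    # and a policy change is effective immediately.
    if context.get("emergency_freeze"):
        return False
    return permission == "read" or user == "admin"

provision("alice", {"read", "approve"})
print(check_static("alice", "approve"))                               # True until re-provisioned
print(check_dynamic("alice", "approve", {"emergency_freeze": True}))  # False right now
```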

Whilst static authorization management is provided by provisioning systems (at the more technical level) and by several GRC vendors (from a business control perspective), vendors of solutions for dynamic authorization management are still relatively rare and, besides this, in most cases relatively small. Besides Oracle with their Entitlements Server and, to some degree, CA with their Embedded Entitlements Manager, vendors include companies like Bitkoo or Engiweb, to name two that are particularly interesting. And, for sure, Microsoft’s approach to claims leads in that direction – but at least in the current approach, authorization decisions aren’t externalized yet.

From my perspective, externalizing these decisions from applications definitely makes sense. Policies can be managed centrally, changes are effective immediately, and application developers don’t have to think much about security – they just rely on external decisions. In fact, things are moved from coding not only to deployment, but to runtime.

There are three challenges:

  • The authorization engines have to be fast
  • They have to integrate with other IAM/GRC tools for consistent management
  • The applications have to be adapted to a specific solution

The first part is just an architecture and engineering task, which has been solved by several vendors. The second requires, from my perspective, standards for the description and exchange of policies, which are still widely missing. The third part could also be addressed by standards. That would give customers the choice between different authorization engines. As long as these standards are missing, customers should, with respect to the last bullet point, focus on implementations which require few changes in applications, to minimize the risk of vendor lock-in. On the other hand, the advantages of such approaches are significant – and vendors like Bitkoo and Engiweb are successful because of that fact.
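Regarding the first challenge, a typical engineering answer is caching: identical decision requests within a short window are answered from memory instead of re-evaluating policies. A minimal sketch, with the obvious trade-off being freshness after policy changes versus throughput:

```python
# A TTL cache in front of a (slow) authorization engine. Illustrative only.
import time

class CachingPdp:
    def __init__(self, engine, ttl_seconds: float = 5.0):
        self.engine = engine  # the real decision function
        self.ttl = ttl_seconds
        self.cache: dict[tuple, tuple[str, float]] = {}

    def decide(self, subject: str, action: str, resource: str) -> str:
        key = (subject, action, resource)
        hit = self.cache.get(key)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0]  # cached decision, no engine call
        decision = self.engine(subject, action, resource)
        self.cache[key] = (decision, time.time())
        return decision

# A short TTL keeps decisions fresh after policy changes;
# a long TTL maximizes throughput.
```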

From my perspective, companies should start looking at these approaches today and really start externalizing security out of the code.

By the way: We’ve given our European Identity Award in the category best innovation in 2008 to some of the vendors mentioned above. Attend European Identity Conference 2009 and learn, amongst many other things, who will be awarded as innovator this year.



Again: Identity Data Theft

22.01.2009 by Martin Kuppinger

Yesterday, news spread about the theft of millions of credit card records at the US company Heartland Payment Systems, based in Princeton, New Jersey. Even though that might be one of the largest cases of data theft in the credit card industry, it wouldn’t be interesting enough to blog about. The – from my perspective – really interesting point is, from what I’ve read in the news, the way the attack was performed.

The information sent is encrypted, but it has to be decrypted to be processed. The attackers grabbed the then-unencrypted information. Surprise? Not really. The problem with security is that virtually any approach is incomplete – and thus inherently insecure. Examples?

  • Passwords are frequently encrypted via SSL when sent to an eCommerce website, but then decrypted and compared – and often they are even stored unencrypted and sent back in case of a lost password (see the sketch after this list). I’ve seen this again just recently, when I received my password in cleartext via eMail.
  • Data is encrypted on a specific type of device using some DLP (Data Loss Prevention) technology. Once delivered, it is decrypted – and might be mailed as an attachment.
  • Access Control Lists are enforced to provide security for data on file servers – but the data is sent to the client unencrypted, and the user might store an unshielded copy (or mail it, or do something else with it).
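The password example, at least, has a well-known fix: store a salted hash instead of the password and compare hashes at login – then there simply is no cleartext to mail back. A minimal sketch using standard library primitives:

```python
# Salted password hashing instead of cleartext storage. Illustrative only.
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest  # persist these, not the password itself

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = store_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
# With this scheme a "lost password" mail can only trigger a reset -
# the cleartext is simply not there to send back.
```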

These are just three examples – out of hundreds or thousands. Another was discussed in a KuppingerCole webinar yesterday, where we talked about “service-oriented security”, e.g. application security infrastructures, SOA security, and so on. The question was about the security between the applications and the security systems (and eventually within the security systems themselves). That is a good question. Often there are security holes somewhere at the center of the security system. SSL itself isn’t the answer; in that case it is about a consistent security approach. Unfortunately, even many IAM and GRC applications don’t provide a really sophisticated security model.

Another interesting point is that there are always other potential security holes. Trojans which grab keystrokes are one example; the man behind you reading the information on your screen is another one. Some of these problems can be addressed, for example with external keyboards for entering sensitive information in eBanking. Others will always be there.

There is no easy solution to these issues. Information Rights Management will help to address many of these problems – I’ve blogged about the need for IRM some time ago. But IRM won’t solve everything. Information has to be processed, thus the systems which process data are extremely sensitive (like in the case I started with). And a business document in an ERP system is, finally, stored in fragments within a database.

From my perspective, the most important point is to work on an authorization strategy (or access strategy) which covers all aspects. Any investment in DLP is at risk as long as it isn’t part of the bigger picture. Point solutions are perfect for masking the real security problems, but they don’t really solve them. An overall strategy which identifies the security holes and tries to use a limited number of well-linked technologies is mandatory to minimize security risks. That strategy has to include everything, from the firewall and SSL-secured connections to IRM and the security of backend systems. That is no easy task, especially because there are frequently many different parties involved which all claim to have found the holy grail of enforcing security. But it can be done – and it will save you a lot of money by avoiding investments in security technology which doesn’t really solve your problems.

For those of you who read German: please participate in this survey. It fits well with the topic of this blog post.


A more efficient approach for managing file server ACLs?

24.10.2008 by Martin Kuppinger

Have you ever heard of Rohati? You should have. They are definitely on my list of really interesting vendors in the Identity and Access Management market and the overall security market. And they are on the way to providing a real alternative to today’s complex, cost-intensive and still error-prone approach to managing access controls on file servers. They don’t end there, but also provide interesting features for controlling access to web applications – but the part I like most is the one around CIFS/SMB (Common Internet File System/Server Message Block) and access control for file systems.

Rohati is a start-up which provides appliances to enforce access controls (or authorizations) at the network level. They are one of the currently few vendors in the new segment of “network-based authorization management” or “network-based entitlement management”. All the traffic is analyzed by their appliances. This analysis supports every layer up to layer 7, i.e. the application layer. The CIFS support, currently in beta, will be ready soon.

Enforcing access controls at that level provides several advantages:

  • At that level, one consistent layer of policy definition and enforcement can be established.
  • Changes in policies are easy to implement. It is, for example, pretty easy to secure shares with financial statements during lock-up periods (see the sketch after this list). That is by far easier to implement and enforce with the Rohati policy-based approach than at the ACL level of Windows servers, where it would require two explicit changes of the ACLs at fixed dates.
  • There is one point of control, instead of different ACLs at different servers.
  • Windows and Samba servers can be managed together.
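The lock-up example from the second bullet, expressed as a time-aware policy rule that is evaluated on every access instead of flipping ACLs on fixed dates – dates, share names, and group names are made up:

```python
# A time-aware share-access rule instead of two scheduled ACL changes.
from datetime import date

LOCKUP_PERIODS = [(date(2008, 12, 15), date(2009, 1, 20))]  # hypothetical
FINANCE_SHARES = {r"\\fileserver\financial-statements"}

def permit_share_access(share: str, user_groups: set[str],
                        today: date | None = None) -> bool:
    today = today or date.today()
    in_lockup = any(start <= today <= end for start, end in LOCKUP_PERIODS)
    if share in FINANCE_SHARES and in_lockup:
        return "investor-relations" in user_groups  # restricted circle only
    return True  # outside lock-up, normal access rules apply downstream

print(permit_share_access(r"\\fileserver\financial-statements",
                          {"staff"}, date(2009, 1, 2)))  # False - lock-up
```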

The Rohati appliance acts in the context of the user, i.e. it requires authentication. But Rohati supports, for example, Kerberos, so the authentication in Windows environments works seamlessly in the background, transparent to the user.

Today, the management of ACLs, at both the file system level and the share level, is often a nightmare – for both administrators and auditors. Managing ACLs consistently, according to defined business rules, across many servers is pretty complex and definitely error-prone. With the Rohati approach, there could be a policy layer in front, instead of the system-level management of ACLs.

For sure, the information still has to be shielded from the ones who access servers locally. But all network access could be controlled centrally.

Usually, I’m no friend of solutions which operate as an additional layer in front of existing systems. But in this case, I think it is really worth a look. Whilst Rohati’s enforcement of authorizations for web applications more or less competes with existing software-based Web Access Management solutions, the CIFS support provides entirely new options for authorization. That approach might take a lot of burden off system administrators and help to avoid errors in authorization management.

I could even imagine that such a policy-based, centralized model for authorization management might significantly influence what Microsoft is doing at the operating system level for a next-generation Windows Server and file system. There are some lessons Microsoft could learn from Rohati and adopt at the OS and software level.


Access or Identity? Or Authorization? Or Entitlements?

24.10.2008 by Martin Kuppinger

Recently, I had several discussions around terms like Access Management, Authorization, and Entitlements. And I thought about what is at the center – is it the identity, or is it access management? Some weeks ago I mentioned in my blog that Hassan Maad, COO of Evidian, has stated that, from his experience, customers understand access, while they have difficulties with the term identity. And when I go back some two years, there was an intensive discussion in the so-called “Identity Gang” about the term “identity”.

In fact, the management of access is the core business requirement. It is about authorizing access; it is about being entitled to do something. Thus, access management, authorization management, and entitlement management are terms which are used in the same context, with slight differences between them.

But: it is not only about allowing access, or authorizing, or entitling. The questions are: WHO is granted access? WHO is authorized to do something? WHO has which entitlements? There is always the “who”, the identity. In other words: these concepts are tightly coupled. Authentication (proving the who) and authorization (granting or denying access) can’t be separated – which, by the way, becomes obvious when looking at the concept of federation.

And there are several other important aspects of the identity, including the approach of understanding core business objects as identities (and vice versa).

However, the concept of identity is more theoretical and more complex than access, authorization, or entitlements. Thus, it might be better to talk about “Identity and Access Management” instead of “Identity Management” – especially because some technologies are more related to identities and others more to access. At least until someone creates a better term, one which is understood by everyone and which replaces “Identity and Access Management”. GRC isn’t that term. But maybe someone has a good idea!?

