Quest acquires Bitkoo – another step for Quest to play with the big boys

19.12.2011 by Martin Kuppinger

During the past few years, Quest has acquired several other IAM vendors: Völcker Informatik (Provisioning and Access Governance), Symlabs (Virtual Directory Services), Vintela (Linux/UNIX Authentication and Integration), and e-DMZ (Privileged User/Account Management) are just some examples of this shopping spree. The newest addition to the Quest portfolio is Bitkoo, a vendor in the Dynamic Authorization Management space (http://jacksonshaw.blogspot.com/2011/12/quest-acquires-bitkoo-and-dives-into.html).

This acquisition comes as no surprise, given that Dynamic Authorization Management is one of the most interesting of the emerging segments within the IAM market. Dynamic Authorization Management is about externalizing authorization decisions from individual applications and performing them against centralized backend systems, based on centrally managed rules. Instead of hard-coding security into applications and maintaining authorization rules in many different applications, Dynamic Authorization Management systems provide the backend for such decisions.
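
To illustrate the idea (not any specific product), here is a minimal sketch of such an externalized decision: the application asks a central policy decision point instead of embedding the rule. All names and the sample policy are hypothetical.

```python
# Minimal sketch: an application delegates an authorization decision to a
# central policy decision point (PDP) instead of hard-coding the rule.
# All names (CentralPDP, the sample policy) are hypothetical.

class CentralPDP:
    """Central backend evaluating authorization requests against shared rules."""

    def __init__(self, policies):
        self.policies = policies  # callables: request -> "Permit"/"Deny"/None

    def decide(self, subject, action, resource, context=None):
        request = {"subject": subject, "action": action,
                   "resource": resource, "context": context or {}}
        for policy in self.policies:
            decision = policy(request)
            if decision is not None:
                return decision
        return "Deny"  # deny by default if no policy applies

# One centrally maintained rule, shared by all applications:
def managers_may_approve(request):
    if request["action"] == "approve" and "manager" in request["subject"].get("roles", []):
        return "Permit"
    return None

pdp = CentralPDP([managers_may_approve])

# Inside any application, the check is a single externalized call:
print(pdp.decide({"id": "alice", "roles": ["manager"]}, "approve", "purchase-order"))  # Permit
```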

Dynamic Authorization Management is thus a core piece of identity and security services and of “Application Security Infrastructures”, i.e. the set of services applications rely on when externalizing identity and security. Such services include administration (for example using central directory services), authentication (ideally versatile and context-/risk-based), authorization (Dynamic Authorization Management), and auditing/alerting. The latter is something of a missing piece, and in that area there is a lack of standards. But that is a topic I’ll cover in another post.
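
As a rough illustration of what such an Application Security Infrastructure looks like from the application’s point of view, here is a sketch of the four services as a single interface – purely hypothetical names, no specific product implied:

```python
# Illustrative sketch only: the four externalized services of an
# "Application Security Infrastructure" modeled as one interface that an
# application would code against. All names are hypothetical.
from typing import Protocol

class ApplicationSecurityInfrastructure(Protocol):
    def administer(self, user_id: str, attributes: dict) -> None:
        """Administration: manage identities in a central directory service."""
    def authenticate(self, credentials: dict, context: dict) -> bool:
        """Authentication: ideally versatile, context-/risk-based."""
    def authorize(self, subject: str, action: str, resource: str) -> bool:
        """Authorization: Dynamic Authorization Management at runtime."""
    def audit(self, event: str, details: dict) -> None:
        """Auditing/alerting: the piece that still lacks standards."""
```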

So Quest has acquired Bitkoo. That is not surprising, given that Bitkoo fits well into the Windows-centric strategy of Quest. It adds to the portfolio, making Quest one of the vendors with a comprehensive portfolio of IAM solutions. Quest is, in terms of the breadth of its portfolio, playing in the same league as the well-known big vendors in that space like CA, IBM, and Oracle (which, by the way, all have something to offer around Dynamic Authorization Management). Quest has shown a clear strategy in acquiring other vendors over the past years. Now it’s up to Quest to carry this message to the world, proving that it is more than the corner store selling a mishmash of tools for administrators. Quest has a different portfolio now – and that makes it a really interesting competitor in that market.

This acquisition will most likely also increase the attention paid to Axiomatics, the most prominent specialized vendor left in the market for Dynamic Authorization Management. Axiomatics is, on the one hand, the independent alternative – and on the other hand the obvious acquisition target number one now that Bitkoo is part of Quest.


Recertification in dynamic authorization systems

28.07.2011 by Martin Kuppinger

Access Governance tools are becoming standard in IAM infrastructures. However, they mainly focus on “static” access controls, i.e. the entitlements granted to a user based on roles and other paradigms. Recertification is supported by these tools, and the solutions are maturing quickly. Thus, that part of Access Governance is easy to solve.

However, the next wave is coming with the increasing success of tools which are commonly called Entitlement Servers or Policy Servers. I tend to call them Dynamic Authorization Systems because they authorize based on rule sets and attributes at runtime. While the rules are relatively static, the attribute values change. I’m a strong believer in these tools and in XACML as the underlying standard for communication between the different modules and in heterogeneous environments.

But what about Access Governance for these environments? Some of the Access Governance tools support this to some degree, allowing some business rules which use defined roles or attributes to be pre-evaluated. However, many rules – especially business rules like “users of the life insurance backoffice with the role xxx and the defined constraint for signing payments up to 50,000 € are allowed to sign that type of claim” – are out of scope. Some support for testing such rules is provided, for example, by Axiomatics.
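
To make that example concrete, here is a minimal sketch of the quoted rule expressed as a runtime policy rather than application code. The 50,000 € limit comes from the example above; the text leaves the role as “xxx”, so the role name and everything else in the sketch are hypothetical placeholders:

```python
# A sketch of the quoted business rule as a runtime policy. The 50,000 EUR
# limit follows the example in the text; role and department names are
# hypothetical placeholders (the text only says "role xxx").

SIGNING_LIMIT_EUR = 50_000

def may_sign_claim(subject: dict, claim_amount_eur: float) -> bool:
    """Permit if the user is in the life insurance back office, holds the
    required role, and the amount is within the defined signing constraint."""
    return (
        subject.get("department") == "life-insurance-backoffice"
        and "claims-signer" in subject.get("roles", [])   # "role xxx" in the text
        and claim_amount_eur <= SIGNING_LIMIT_EUR
    )

# Recertifying such a rule means reviewing role, department, AND the
# attribute-based constraint together - not just a static role assignment.
user = {"department": "life-insurance-backoffice", "roles": ["claims-signer"]}
print(may_sign_claim(user, 42_000))  # True
print(may_sign_claim(user, 60_000))  # False: exceeds the 50,000 EUR constraint
```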

However, I don’t see a solution which provides integrated Access Governance for all types of entitlements. Given that Dynamic Authorization Systems are gaining momentum, it’s just a matter of time until auditors ask for such solutions. These solutions should, like modern Access Governance tools, support lifecycle management for the policies, including approvals, auditing and analysis, and the recertification of such rules. That is more complex than what is done today. But, without any doubt, we will need this soon.

It will be interesting to observe who becomes the leader in that market. The vendors in the market of Dynamic Authorization Systems themselves? The Access Governance vendors? New startups?

By the way: The topic isn’t that new – look here.


Beyond LDAP – have a look at system.identity

20.06.2010 by Martin Kuppinger

LDAP (Lightweight Directory Access Protocol) is well established. It is the foundation for today’s Directory Services, which support LDAP as a protocol and which usually build their data structure on the associated LDAP schema. There are many interfaces for developers to use LDAP, from the LDAP C API to high-level interfaces for many programming environments.

Even though LDAP is well established, it is somewhat limited. There are several restrictions – two important ones are:

  • The structure of LDAP is (more or less) hierarchical. There is one basic structure for containers – and linking leaf objects (think of the association of users and groups) is somewhat limited. That structure is a heritage of X.500, from which LDAP is derived – LDAP originally being the lightweight version of the DAP (Directory Access Protocol). X.500 was constructed by telcos for telcos, i.e. with respect to their specific needs for structuring information. However, anyone who has ever thought about structuring Novell’s eDirectory or Microsoft’s Active Directory knows that there is frequently more than one hierarchy, for example the location and the organizational structure. The strict hierarchy of LDAP is an inhibitor for several use cases (see the sketch after this list).
  • LDAP is still focused on the specific, single directory. It doesn’t address the need to store parts of the information in fundamentally different stores. But the same piece of information might be found locally on a notebook, in a network directory like Active Directory, in a corporate directory, and so on. How to deal with that? How to use the same information across multiple systems, exchange it, associate usage policies with it, and so on? That is out of scope for LDAP.
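
A small sketch of the first limitation, using the Python ldap3 library against a hypothetical server and schema: every search is rooted in exactly one subtree, so a second hierarchy (here, an organizational view on top of a location-based tree) has to be approximated with attributes or group links:

```python
# Sketch of the single-hierarchy limitation, using the ldap3 library.
# Server, credentials, and DNs are hypothetical.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=admin,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Entries live under ou=munich (location hierarchy). An organizational view
# ("everyone in sales") cannot be a second placement of the same entries;
# it has to be approximated with attributes or group memberships:
conn.search(search_base="ou=munich,dc=example,dc=com",
            search_filter="(&(objectClass=person)(departmentNumber=sales))",
            attributes=["cn", "departmentNumber"])
print(conn.entries)
```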

I could extend the list – but this is not about the limitations of LDAP. LDAP has done a great job for years, but there is obviously a need for the next big step. An interesting foundation for that next big step comes from Kim Cameron, Chief Identity Architect at Microsoft. He has developed a schema which he calls system.identity. There hasn’t been much noise around it so far. There is a stream from last year’s Microsoft PDC, there is a little information on MSDN plus a blog post, and there is the keynote from this year’s European Identity Conference. But it is worth having a look at. The approach of system.identity is to define a flexible schema for identity-related information which can cover everything – from local devices to enterprise- and internet-style directories, from internal users to customers and device identities, including all the policies. It is, from my perspective, a very good starting point for the evolution (compatibility with LDAP is covered) well beyond LDAP and today’s directories.

I recently put the concept to a stress test in a customer workshop. The customer is thinking about a corporate directory. Most people there are not directory guys, but enterprise IT architects. And they definitely liked the path system.identity is showing. It covers their needs much better than the LDAP schema. That proved to me that system.identity is not only for geeks like me but obviously for the real world. Thus: Have a look at it and start thinking beyond LDAP. The concept of system.identity, despite being at an early stage, is a very good place to start.


The unsocial side of bad software architecture

25.01.2010 by Martin Kuppinger

Last week, there was the news that the Federal Employment Office of Germany will reclaim excess payments from potentially more than a million so-called “Hartz 4” recipients. What appears to be of political and social relevance is also interesting from an IT perspective – because it’s about the negative impact of archaic software architecture.

Let’s start with the background. Hartz 4 stands for both social welfare aid and unemployment aid, named after Peter Hartz, a former Volkswagen board member and advisor to the German government on how to change and optimize these aids and insurances. There is a significant number of Hartz 4 recipients. Many of them are either families or single parents. Starting January 1st, 2010, the child allowance was increased by 20 € per child and month. However, child allowance is charged against Hartz 4, so Hartz 4 recipients with children shouldn’t benefit from that increase – not that social, is it?

Now the problem arises: Many have received the 20 € increase (or x times 20 €, depending on the number of children) – and now that shall be reclaimed. The Federal Employment Office came up with the explanation that this was due to the short period of time between the decision to increase the child allowance and the due date. However, there were some weeks in between. Regardless of whether the money will be reclaimed or not (there are interesting legal discussions about this), that clearly shows, together with other explanations, that there is an IT issue behind it.

That issue is software in which such a change was obviously too complex to perform in time, in a planned, structured manner. That is, looking at topics like “Software Architecture”, “GRC”, and “Externalization of Security”, pretty interesting – especially from the GRC view on software architecture. Obviously, a change in a business policy couldn’t be transferred to the software in time. That is a typical GRC issue: business policies which lead to complex change processes in IT when code has to be adapted to these changes. That leads to issues like time-to-market or, in this case, has a significant social impact. From a GRC perspective, that is an issue – a governance issue IT management has to deal with. It is a software architecture issue, because such problems occur only due to a non-policy-aware software architecture and due to hard-coding things which shouldn’t be hard-coded. Think about policy-controlled software and defined request/approval workflows for such fundamental changes. That isn’t hard to architect; it should just be good practice. It would lead to applications which are acceptable from a GRC point of view (with GRC being much more than security…). It would be secure. And, most presumably, such software would rely on policies, and thus on externalization, for security as well – especially access controls.
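
As a sketch of what such policy-controlled software could look like – the 20 € figure and the effective date come from the case above; everything else is a hypothetical illustration, not the actual system:

```python
# Sketch of "policy-controlled software": the benefit offset is an externally
# managed, dated policy parameter instead of code. The 20 EUR amount and the
# Jan 1st, 2010 date come from the post; names and structure are hypothetical.
from datetime import date

# Policy table maintained via a request/approval workflow, not a code change
# (newest entry first):
CHILD_ALLOWANCE_OFFSET_PER_CHILD = [
    (date(2010, 1, 1), 20.0),   # increase effective January 1st, 2010
    (date(2009, 1, 1), 0.0),    # illustrative prior baseline
]

def child_allowance_offset(children: int, on: date) -> float:
    """Amount charged against Hartz 4, looked up from the effective policy."""
    for effective, per_child in CHILD_ALLOWANCE_OFFSET_PER_CHILD:
        if on >= effective:
            return children * per_child
    return 0.0

# The January payment run picks up the new rule automatically - no emergency
# code change, no excess payments, no reclaiming afterwards:
print(child_allowance_offset(children=2, on=date(2010, 1, 15)))  # 40.0
```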

There is little reason to assume that the Federal Employment Office has software in place that meets these fundamentals of good software architecture. The really bad thing, besides all the unnecessary costs associated with such archaic software, is its negative social impact.


XACML – why it is so important

22.10.2009 by Martin Kuppinger

XACML (eXtensible Access Control Markup Language) is gaining increasing attention as one of the core standards in the field of information security and thus IT security. Whilst standards like SAML (Security Assertion Markup Language) address the problem of authentication, XACML is about authorization – the more complex challenge. XACML allows the definition and exchange of authorization policies in a heterogeneous environment. Whether it is about cloud security and controlling the authorization policies of cloud services or about SOA security for internal applications: XACML supports authorization management in such use cases.

However, there is no such thing as a free lunch: XACML requires more than tools like XML/SOA Security Gateways which support the standard, or cloud services with XACML support. There are two other important aspects:

  • XACML in fact means a shift from a more static security approach, as with ACLs (Access Control Lists), towards a dynamic approach based on policies which are applied at runtime. These dynamic security concepts are more difficult to understand, to recertify, and to audit and analyze in their real-world implications. Thus, the use of XACML requires not only the right tools but also well-thought-out concepts for policy creation and management.
  • XACML is just a foundation for expressing policies. Within a use case, policy concepts have to be defined. Over time, there should be higher-level standards or defined use cases building on XACML and standardizing the content of these policies.

Anyway, XACML is very useful. One of the most interesting areas for XACML is SOA security. Currently, many SOA-based applications still lack a valid concept for authorization. Authorization is still frequently built into these applications. XACML can provide the policies to externalize authorization management and thus add flexibility to SOA-based applications.
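
To give an impression of the model (not of XACML syntax itself), here is a minimal sketch in the XACML style: attribute-based requests evaluated against externally managed rules, with a combining algorithm and a default deny. It mimics the concepts in plain Python and is not a real XACML implementation:

```python
# A minimal XACML-style sketch: attribute-based rules evaluated at runtime,
# kept outside the SOA application. This only mimics the XACML request/rule
# model in plain Python; it is not a real XACML engine.

RULES = [
    # (effect, condition over the attribute-based request)
    ("Permit", lambda r: r["action"] == "read"
                         and "employee" in r["subject"]["roles"]),
    ("Permit", lambda r: r["action"] == "write"
                         and "editor" in r["subject"]["roles"]
                         and r["resource"]["classification"] != "confidential"),
]

def evaluate(request: dict) -> str:
    """First-applicable combining: return the effect of the first matching rule."""
    for effect, condition in RULES:
        if condition(request):
            return effect
    return "Deny"   # default deny, as a XACML policy would typically specify

request = {
    "subject": {"id": "bob", "roles": ["employee", "editor"]},
    "action": "write",
    "resource": {"id": "doc-17", "classification": "internal"},
}
print(evaluate(request))  # Permit
```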

Overall, it is – from my perspective – definitely worth spending some time exploring the potential of XACML to improve the security of systems and applications. There are many areas where XACML can be used successfully today. However, as with any emerging technology, there will be a lot of improvements in the managing and consuming applications (and, hopefully, around the standards or use cases building on XACML) over the next few years. Thus the step to XACML has to be considered carefully. The good thing is: It is about standards, so the risk of lock-in isn’t that big.

We will talk about this in more depth in an upcoming webinar. Register for free!


Dynamic authorization management

18.03.2009 by Martin Kuppinger

Authorization management is becoming increasingly popular. But there are, in fact, two very different approaches:

  • Static authorization management, where changes are provisioned to the target systems.
  • Dynamic authorization management, where authorization decisions are externalized to authorization engines at runtime.

The latter requires changes to the applications, but it leads to the externalization of authentication and authorization (and hopefully auditing as well) from applications. Everything can be easily managed from outside the applications.

Whilst static authorization management is provided by provisioning systems (at the more technical level) and by several GRC vendors (from a business control perspective), vendors of solutions for dynamic authorization management are still relatively rare and, besides this, in most cases relatively small. Besides Oracle with their Entitlements Server and, to some degree, CA with their Embedded Entitlements Manager, vendors include companies like Bitkoo or Engiweb, to name two which are particularly interesting. And, for sure, Microsoft’s approach to claims leads in that direction – but at least in the current approach, authorization decisions aren’t externalized yet.

From my perspective, externalizing these decisions from applications definitely makes sense. Policies can be managed centrally, changes are effective immediately, and application developers don’t have to think much about security. They just rely on external decisions. In fact, things are moved from coding not only to deployment, but to runtime.
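
A sketch of what that looks like from the developer’s side – the authorize() client below is a hypothetical stand-in for whatever engine is deployed:

```python
# Sketch of the developer's view once decisions are externalized.
# authorize() stands in for the deployed engine's client API; all names
# are hypothetical.

def authorize(subject: str, action: str, resource: str) -> bool:
    """Stand-in for a runtime call to the central authorization engine."""
    # Simulated central decision so the sketch runs; in reality this is a
    # remote call, and policy changes take effect without redeploying apps.
    return (subject, action) == ("alice", "cancel")

def cancel_order(user: str, order_id: str) -> None:
    # No roles, limits, or policy logic hard-coded in the application:
    if not authorize(user, "cancel", f"order:{order_id}"):
        raise PermissionError(f"{user} may not cancel {order_id}")
    print(f"order {order_id} cancelled")  # business logic only

cancel_order("alice", "4711")  # permitted by the (simulated) central policy
```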

There are three challenges:

  • The authorization engines have to be fast
  • They have to be integratable with other IAM/GRC tools for consistent management
  • The applications have to be adapted to a specific solution

The first part is just an architecture and engineering task which has been solved by several vendors. The second requires, from my perspective, standards for the description and exchange of policies, which are still widely missing. The third part could also be addressed by standards. That would give customers the choice between different authorization engines. As long as these standards are missing, customers should, with respect to the last bullet point, focus on implementations which require few changes in applications, to minimize the risk of vendor lock-in. On the other hand, the advantages of such approaches are significant – and vendors like Bitkoo and Engiweb are successful because of that fact.
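
One way to keep those changes small is a thin, vendor-neutral facade in the applications, so that swapping the engine later touches one adapter rather than every application. A hypothetical sketch of that idea:

```python
# Sketch of the "few changes in applications" advice: applications code
# against a thin neutral facade, so replacing the authorization engine later
# touches one adapter, not every application. All names are hypothetical.
from abc import ABC, abstractmethod

class AuthorizationEngine(ABC):
    @abstractmethod
    def is_permitted(self, subject: str, action: str, resource: str) -> bool: ...

class VendorXAdapter(AuthorizationEngine):
    """Wraps one vendor's proprietary API behind the neutral interface."""
    def is_permitted(self, subject, action, resource):
        return True  # vendor-specific call would go here; stubbed for the sketch

engine: AuthorizationEngine = VendorXAdapter()  # the only vendor-specific line

def delete_document(user: str, doc_id: str) -> None:
    if not engine.is_permitted(user, "delete", f"document:{doc_id}"):
        raise PermissionError("denied by central policy")
```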

From my perspective, companies should start looking at these approaches today and really start externalizing security out of the code.

By the way: In 2008 we gave our European Identity Award in the category of best innovation to some of the vendors mentioned above. Attend the European Identity Conference 2009 and learn, amongst many other things, who will be awarded as an innovator this year.



IT organizations have to change – for economic reasons!

10.12.2008 by Martin Kuppinger

During the last months’ research I frequently ended up thinking about IT organizations – both the organization of IT itself and IT as part of the overall organizational structure, including the role of the CIO. There is, from my perspective, no doubt that fundamental changes are required.

Let’s start with the IT organization. Early in 2008, we did a survey and report on the topic of “SOA Governance” together with Ernst & Young (the German subsidiary), which we first presented at EIC 2008 (by the way: EIC 2009 will again be in Munich, May 5th to 8th, 2009 – hope to meet you there). The most important result was that the main problem of SOA Governance and, as part of it, SOA security is the missing application security infrastructures, i.e. standardized approaches for securing applications. The reason for that is also very obvious: siloed IT organizations.


Claims, Tokens, End-to-End Security

29.04.2008 by Martin Kuppinger

In one of the panels at the recent EIC 2008, on end-to-end security for SOA applications, there was a discussion about whether this target could really be achieved. One comment was that built-in federation awareness in every single web service won’t work with the thousands of web services you might have today or in the future. The handling of trusts would be too complex, was the argument.

Yes, if you handle every trust separately. No, if there is a sort of trust broker for at least most of the web services, which provides a standard trust with no specific configuration per web service. In that case even that concept might work – and federation-enabling web services could be done by the application these services run on.

But it can be done more easily, in the context of web service security applications or other approaches. My position is that a web service has to run in the context of the user’s identity. Usually the context will be derived, e.g. a role, a group, or something else. A layer like web service security should be able to work with such a context, which might be provided within a SAML token. But, in general, it might be any type of claim – Kim Cameron’s concept of claims-based security fits in pretty well here.

In fact, the issue can be solved very easily: Take the information in a claim or assertion, transform it into a parameter, and invoke the web service with this parameter. Then the web service can return exactly the information which is relevant to (or allowed to be seen by) the identity the parameter has been derived from. The application infrastructure just has to work as a special type of STS (Security Token Service) which transforms security tokens into parameters for web services.
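
A minimal sketch of this claim-to-parameter idea, with a deliberately simplified token format and hypothetical names – the point is only that the service itself sees a plain parameter, never the token:

```python
# Sketch of the claim-to-parameter idea: the platform validates the token,
# extracts the relevant claim, and passes it on as a plain parameter, so the
# web service itself stays identity- and federation-ignorant.
# Token format and all names are simplified/hypothetical.

def extract_role(token: dict) -> str:
    """Stand-in for STS-like token processing (e.g. a validated SAML assertion)."""
    assert token.get("issuer") in {"https://idp.example.com"}, "untrusted issuer"
    return token["claims"]["role"]

def list_contracts(role: str) -> list[str]:
    """The web service: no token handling, just a parameter it can filter on."""
    data = {"clerk": ["C-1"], "manager": ["C-1", "C-2", "C-3"]}
    return data.get(role, [])

token = {"issuer": "https://idp.example.com", "claims": {"role": "manager"}}
print(list_contracts(extract_role(token)))  # ['C-1', 'C-2', 'C-3']
```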

With this approach, it is also possible to completely implement the idea of claims in SOA security. The accounting of web services works as well, because the platform from which the web services are invoked knows about the identity (or something derived from it), because it knows the claim or assertion. And the web service itself can be fully identity- and federation-ignorant.

In fact, there is no reason not to implement real end-to-end security, either with federation and efficient trust handling or with a claims-/assertion-/parameter-based approach as described above.



The shortcomings of common SOA security approaches

26.11.2007 by Martin Kuppinger

I have recently written a report on the relationship between IAM (Identity and Access Management) and SOA (Service-Oriented Architecture/Applications). One major aspect of this relationship is end-to-end security, i.e. securing the interaction of a user with an application (and the application which implements a business process) all the way down to backend systems like databases.

That is inevitable because using a service in the context of a user identity or a user role is the only way to achieve consistent, externalized security instead of coded security, where some return value of a service is filtered by the application depending on the user’s role. Coded security is obviously contradictory to compliance. It’s expensive in terms of coding and auditing. Thus, it doesn’t make sense.

On the other hand, the most common approaches to web service security are constructed the same way as web access management solutions: building a layer in front of the services which uses policies to decide how services are used. That includes some part of authorization and sometimes authentication. The problem is: Using such an approach means that there is definitely no end-to-end security. From my point of view, there is no alternative to federation for transporting claims down to the service level. That is the only approach for real end-to-end security and thus for applications which are architected to fulfill the increasing compliance requirements.


Proving the need for an application security infrastructure (budget)

15.11.2007 by Martin Kuppinger

One of the emerging topics in the broader IAM space integrates GRC and Identity Management: Identity Risk Management, including aspects like Identity Risk Metrics. Identity Risk Metrics are used to measure specific aspects of Identity Management. These metrics can be mapped to risks and thus serve as a means to detect and, in the next step, reduce risks. Such metrics can be defined in many areas.

Maybe the most interesting ones are Application Risk Metrics – in the context of digital identities. Elements of this category include things like:

  • Usage of central identity stores (instead of application specific identity stores)
  • Sensitive attributes in decentralized identity stores
  • Sensitivity of the application and its data
  • Supported authentication mechanisms and their strength
  • Number of user accounts
  • Encrypted storage of passwords
  • and many others…

The analysis of these metrics automatically leads to a clear view of the level of centralization of Identity Management and, combined with the risk view, to a clear rating of the risks which exist due to decentralized user management at the application level and the lack of an application security infrastructure.

Measuring these metrics can clearly lead to more management support for building application security infrastructures and changing the way security is implemented in applications. It is not very difficult to do this sort of analysis. It doesn’t need specific risk management software; it is just about identifying the applications (which is the hardest part) and counting – and maybe doing some analysis in Excel. And it is about mapping the result to defined risks and providing an answer to the question of “how to reduce the risk”. The answer is quite obvious – it is the approach of application security infrastructures.
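
As a hypothetical illustration of how simple this counting and mapping can be – the inventory entries, weights, and scores below are purely illustrative:

```python
# Sketch of the "identify, count, map to risk" exercise: a hand-maintained
# application inventory scored with a few of the metrics listed above.
# Inventory entries, weights, and thresholds are all illustrative.

applications = [
    {"name": "HR portal",  "central_store": True,  "sensitive": True,
     "accounts": 5000, "passwords_encrypted": True},
    {"name": "legacy CRM", "central_store": False, "sensitive": True,
     "accounts": 1200, "passwords_encrypted": False},
]

def risk_score(app: dict) -> int:
    """Higher = riskier; each metric contributes an illustrative weight."""
    score = 0
    score += 0 if app["central_store"] else 3        # app-specific identity store
    score += 2 if app["sensitive"] else 0            # sensitivity of app and data
    score += 1 if app["accounts"] > 1000 else 0      # number of user accounts
    score += 0 if app["passwords_encrypted"] else 3  # unencrypted passwords
    return score

for app in sorted(applications, key=risk_score, reverse=True):
    print(f"{app['name']}: risk {risk_score(app)}")
# legacy CRM: risk 9  <- the decentralized store drives the case for an
# HR portal: risk 3      application security infrastructure
```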

And that is just one example of what you can do with Identity Risk Metrics.

