18.03.2009 by Martin Kuppinger
Authorization management is becoming increasingly popular. But there are, in fact, two very different approaches:
- Static authorization management, where changes are provisioned to the target systems.
- Dynamic authorization management, where authorization decisions are externalized to authorization engines at runtime.
The latter requires changes to the applications, but it leads to the externalization of authentication and authorization (and, hopefully, auditing as well) from applications. Everything can then be managed easily from outside the applications.
Whilst static authorization management is provided by provisioning systems (at the more technical level) and by several GRC vendors (from a business control perspective), vendors of solutions for dynamic authorization management are still relatively rare and, besides this, in most cases relatively small. Besides Oracle with their Entitlements Server and, to some degree, CA with their Embedded Entitlements Manager, vendors include companies like Bitkoo or Engiweb, to name two which are particularly interesting. And, for sure, Microsoft's approach to claims leads in that direction – but at least in the current implementation, authorization decisions aren't externalized yet.
From my perspective, externalizing these decisions from applications definitely makes sense. Policies can be managed centrally, changes are effective immediately, and application developers don’t have to think much about security. They just rely on external decisions. In fact, things are moved from coding not only to deployment, but to runtime.
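To make the idea concrete, here is a minimal sketch of what externalized, dynamic authorization looks like from the application's point of view: the application acts as an enforcement point and delegates every decision to a central policy engine. All class and policy names are illustrative and not taken from any specific product.

```python
class PolicyDecisionPoint:
    """Central engine evaluating policies that are managed outside the application."""

    def __init__(self, policies):
        # policies: iterable of (role, resource, action) tuples permitted by policy
        self.policies = set(policies)

    def is_permitted(self, role, resource, action):
        return (role, resource, action) in self.policies


class Application:
    """The application contains no authorization logic of its own."""

    def __init__(self, pdp):
        self.pdp = pdp

    def view_contract(self, user_role, contract_id):
        # The decision is externalized: changing the central policy changes
        # behavior immediately, without touching application code.
        if not self.pdp.is_permitted(user_role, "contract", "read"):
            raise PermissionError("denied by central policy")
        return f"contract {contract_id}"


pdp = PolicyDecisionPoint([("clerk", "contract", "read")])
app = Application(pdp)
print(app.view_contract("clerk", 4711))  # permitted by policy
```

The point of the sketch is the separation: the developer writes one call to the decision point, and everything else – which roles may do what – lives in centrally managed policies.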
There are three challenges:
- The authorization engines have to be fast
- They have to integrate with other IAM/GRC tools for consistent management
- The applications have to be adapted to a specific solution
The first part is just an architecture and engineering task which has been solved by several vendors. The second requires, from my perspective, standards for the description and exchange of policies, which are still widely missing. The third part could also be addressed by standards. That would give customers the choice between different authorization engines. As long as these standards are missing, customers should, with respect to the last bullet point, focus on implementations which require few changes in applications, to minimize the risk of vendor lock-in. On the other hand, the advantages of such approaches are significant – and vendors like Bitkoo and Engiweb are successful because of that fact.
From my perspective, companies should start looking at these approaches today and really start externalizing security out of the code.
By the way: We’ve given our European Identity Award in the category best innovation in 2008 to some of the vendors mentioned above. Attend European Identity Conference 2009 and learn, amongst many other things, who will be awarded as innovator this year.
The need for standards
10.12.2008 by Martin Kuppinger
During the last months' research I frequently ended up thinking about IT organizations – both the organization of IT itself and IT as part of the overall organizational structure, including the role of the CIO. There is, from my perspective, no doubt that fundamental changes are required.
Let's start with the IT organization. Early in 2008, we did a survey and report on the topic of "SOA Governance" together with Ernst & Young (the German subsidiary), which we presented for the first time at EIC 2008 (by the way: EIC 2009 will again be in Munich, May 5th to 8th, 2009 – hope to meet you there). The most important result was that the main problem of SOA Governance and, as part of it, SOA Security is the missing application security infrastructures, i.e. standardized approaches for securing applications. The reason for that is also very obvious: siloed IT organizations.
13.06.2008 by Martin Kuppinger
These days I've read some entries in the Beteo blog, a blog provided by a Swiss software and consulting company positioned somewhere between SOA and BSM – or BTO, the term they tend to use due to some affinity to HP. The interesting thing is that Beteo not only claims but proves that the service management principles and tools which are commonly used in IT infrastructure management can be applied to the field of software change management as well. Beteo, a company I've been in contact with since they were founded (and I was in contact even with their predecessor), uses this concept with success, especially in SAP environments.
That leads to the obvious conclusion: There should be a much more unified understanding of services. There should be one BSM approach at the upper layer. BSM, as real business service management, should really address the business aspects like
- Defining services from a business point of view – like “manage a contract” including storage, access rights,…
- Mapping these business services to IT services
- Managing these services from a business perspective, e.g. accounting and controlling (do we really need these services?), …
The next layer is IT services, i.e. the more technical services IT provides to deliver a business service. These services can be managed with ITIL principles and – at least to some degree – with today's so-called BSM tools.
Whether the mapping of IT services to the IT implementations of business processes is part of the IT service layer or the business service layer is a matter of definition. I tend to place the description of business processes at the business service layer, and the implementation of business processes in IT – and thus the relationship of these processes with IT services – at the IT service layer.
Anyhow, there is a layer below that for the different types of IT services. Today, BSM focuses mainly on IT infrastructure services and provides mainly an ITISM (IT Infrastructure Service Management) – and not an ITSM (IT Service Management) or a real BSM (Business Service Management).
Besides the IT Infrastructure Services we have IT Application Services. These services tend to be more granular, down to web services and so on.
But regardless of the service you talk about: Each service can be managed with the same principles – and ITIL (and ISO 20000) is a good starting point if you focus on the principles for managing services. You can define, implement, run, and optimize any type of service. Whether you look at high-level business services or at low-level application services, the way you should handle services is, from a conceptual view, the same. The business aspects like service accounting and controlling can be applied at every level as well.
Given that, a unified view of services and their management would bring a lot of benefits to IT: the reuse of management software, improvements in that software when the experiences of infrastructure and software change management are combined and influence the tools, the capability for overall auditing and accounting of services, and a consistent authorization management for services, their management, and their use.
But that would mean that the silos on the vendor side (where software management is in most cases a different division than infrastructure management) would have to disappear, just as the silos in today's IT organizations would have to open up for more cooperation.
29.04.2008 by Martin Kuppinger
In one of the panels at the recent EIC 2008, on end-to-end security for SOA applications, there was a discussion about whether this target could really be achieved. One comment was that built-in federation awareness in every single web service won't work with the thousands of web services you might have today or in the future. The handling of trusts would be too complex, was the argument.
Yes, if you handle every trust separately. No, if there is a sort of trust broker for at least most of the web services, which provides a standard trust with no specific configuration per web service. In that case even this concept might work – and federation-enabling web services could be done by the application these services run on.
But it can be done more easily, in the context of web service security applications or other approaches. My position is that a web service has to run in the context of the user's identity. Usually the context will be derived, e.g. a role, a group, or something else. A layer like web service security should be able to work with such a context, which might be provided within a SAML token. But, in general, it might be any type of claim – Kim Cameron's concept of claims-based security fits in pretty well here.
In fact, the issue can be solved very easily: Take the information in a claim or assertion, transform it into a parameter, and invoke the web service with this parameter. Then the web service can return exactly the information which is relevant to (or allowed to be seen by) the identity the parameter has been derived from. The application infrastructure just has to work as a special type of STS (Security Token Service) which transforms security tokens into parameters for web services.
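The approach described above can be sketched in a few lines. The token structure, role names, and the service's data are simplified stand-ins – not a real SAML or STS implementation – but they show the essential step: claim in, parameter out, identity-ignorant service.

```python
def extract_role_claim(token):
    """Acts like a special STS: transforms a security token into a plain parameter.

    The token is modeled as a dict standing in for a parsed SAML assertion
    or claim set (purely illustrative).
    """
    return token["claims"]["role"]


def contract_service(role):
    """An identity- and federation-ignorant web service: it only sees a
    parameter and returns exactly the data that parameter may see."""
    data = {
        "clerk":   ["contract summary"],
        "auditor": ["contract summary", "full audit trail"],
    }
    return data.get(role, [])


token = {"claims": {"role": "auditor"}}
result = contract_service(extract_role_claim(token))
print(result)  # ['contract summary', 'full audit trail']
```

Note that the service itself never touches a token or a trust relationship; all federation handling stays in the layer that performs the claim-to-parameter transformation.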
With this approach, it is also possible to completely implement the idea of claims into SOA security. The accounting of web services works as well, because the platform from which web services are invoked knows about the identity (or something derived from it), because it knows the claim or assertion. And the web service itself can be fully identity- and federation-ignorant.
In fact, there is no reason not to implement real end-to-end security, either with federation and an efficient trust handling or with a claims-/assertion-/parameter-based approach as described.
29.02.2008 by Martin Kuppinger
In a perhaps simplistic view of IT, there are three important pillars at the IT infrastructure level. Using the – sometimes improper – buzzwords, these are
- Identity (and Access) Management (IAM)
- SOA - in fact more the technologies for business processes and flexible applications, e.g. including BPM (Business Process Management)
- BSM (Business Service Management), or ITSM (IT Service Management), or BTO (Business Technology Optimization), or however you want to name what used to be systems management and now, with a new layer on top, is something "entirely new". I would say it claims to be something new, but the layer on top is far from being mature.
You might claim that the enterprise systems are missing from that list. Yes, they are missing. No, they are in, because SOA and BPM are the way these systems will be used in the future – have a look at the strategies of SAP with NetWeaver or Oracle with Fusion.
26.11.2007 by Martin Kuppinger
These days I have written a report on the relationship between IAM (Identity and Access Management) and SOA (Service-Oriented Architecture/Applications). One major aspect of this relationship is end-to-end security, i.e. securing the interaction of a user with an application (and the application which implements a business process) all the way down to backend systems like databases.
That is inevitable because using a service in the context of a user identity or a user role is the only way to achieve consistent, externalized security instead of coded security, where some return value of a service is filtered by the application depending on the user's role. Coded security is obviously contradictory to compliance. It's expensive in terms of coding and auditing. Thus, it doesn't make sense.
On the other hand, the most common approaches for web service security are constructed the same way as web access management solutions: building a layer in front of the services which uses policies to decide how services are used. That includes some part of authorization and sometimes authentication. The problem is: Using such an approach means that there is definitely no end-to-end security. From my point of view, there is no alternative to federation to transport claims down to the service level. That is the only approach for real end-to-end security and thus for applications which are architected to fulfill the increasing compliance requirements.
22.11.2007 by Martin Kuppinger
At a workshop I held yesterday, I had an interesting conversation about some aspects of IAM – especially the way IAM products are developed without reuse of existing technologies. The discussion isn't really new to me. I discussed some of the aspects some five or six years ago with one of the leading IAM vendors. A fruitless discussion, by the way.
MDM, i.e. Master Data Management, is a concept for building and maintaining master data, for example supplier data or material data. There is no real difference to what meta directory services are providing. The only real differentiator is the specific connectors. But the basic concepts are the same. The concept of delivering data quality is inherent to MDM, sometimes based on sophisticated pattern matching approaches. That raises the question: Why don't we use these technologies for many of the aspects which are handled today by proprietary IAM products?
EAI, i.e. Enterprise Application Integration, is an approach for using a sort of bus system to connect different systems and to exchange any type of information. Some two days ago a vendor told me that some of its customers are using EAI (or enterprise service buses) to exchange SPML for the integration of different provisioning systems. Siemens, by the way, used such a technology some time ago. The customers complained about the complexity of this approach. On the other hand, such technologies are widely deployed in larger corporations, are very flexible regarding their connection to databases and the core business applications, and ensure reliable transport. Thus, they often provide functionality which is missing, for example, in provisioning systems. Again this raises the "why" question.
The provisioning-specific workflows are another example, even if vendors are starting to fix this and to support other, external workflow systems which often offer broader functionality and interfaces to process management tools.
My answer to the "why" questions is pretty easy (and in fact, there are two answers): I assume that many of the architects of today's IAM products aren't familiar with the concepts I've mentioned and other important IT concepts. And you can't use what you don't know. The second part of the answer is: In the first step it is much easier to build a system without integrating these sometimes pretty complex approaches. But in the long run it's inefficient.
Besides this, there are two perspectives: From the IAM-only perspective, using MDM or EAI as a foundation leads to more complex products. From an overall IT perspective, it leads to less complexity. Thus, it is also a question of the point of view. Anyway: I believe that it will at least be helpful to look beyond the common IAM approaches. That's what vendors really should do these days. The example of workflows, which are more and more externalized, proves that there is some need to do that. By the way: Doing that might also lead to new competition. Think about MDM or EAI specialists and some other company which focuses on connectors. There might be interesting business models for both of them to successfully compete in the IAM business.
22.11.2007 by Martin Kuppinger
Have you ever thought about assigning IT costs in a correct manner? Services and IAM will help you. Services are a means for a more granular view of what IT provides. That is true for the IT infrastructure services which are, for example, covered in ITIL. It is just as true for the services used in SOA concepts. But services aren't sufficient. The assignment of IT costs requires knowledge about the user: Who is using which services, and how frequently? This question has to be answered as well. That means that you have to know in the context of which user a service runs or – more abstractly, for infrastructure services – is used.
Thus, bringing IAM and BSM together and combining IAM with SOA is the foundation on which a more efficient IT cost management could be built. And it is, as well, the foundation for the thing I would call ERP for IT.
16.10.2007 by Martin Kuppinger
A side effect of application security infrastructures
When writing my upcoming report on the architecture of application security infrastructures, I also thought about the potential business value of this type of service layer, which sits between applications and the security infrastructure (in fact, the term "application security infrastructure" is somewhat misleading, because it's more about a service layer which sits on top of the infrastructure – and the service layer is the core, not the infrastructure). When thinking about the business value, it became clear to me that there is a clear link to what I have written in "The ERP for IT" about the chance to use service orientation for making IT a sort of business unit.
Application security infrastructures can help IT become more business-oriented and more economical. How? Very easily: These infrastructures expose defined services (security services, mainly identity services) to applications and network infrastructure components (for example, "identity storage services" as an interface to directories). The usage of these services can be measured. The costs of the underlying infrastructure can be measured as well and related to specific services. So, in effect, you have the cost per use per service.
With that information you can, for example, predict the costs of new applications much more precisely than before. You can assign the costs of the infrastructure much more precisely than before to the consumers of the services. You can offer more efficient services at lower costs. And so on… IT can act like a business unit or, more familiarly, like an "internal outsourcer".
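A minimal sketch of this cost-per-use-per-service idea: meter each service invocation per consumer, then allocate the measured cost of a service to its consumers by their usage share. Service names, consumers, and cost figures are invented for the example.

```python
from collections import Counter


class ServiceMeter:
    def __init__(self, cost_per_service):
        # cost_per_service: measured total cost attributed to each exposed service
        self.cost_per_service = cost_per_service
        self.usage = Counter()  # (consumer, service) -> invocation count

    def record(self, consumer, service):
        """Meter one invocation of a service by a consumer."""
        self.usage[(consumer, service)] += 1

    def charge(self, consumer):
        """Allocate each service's cost to this consumer by its usage share."""
        total = 0.0
        for service, cost in self.cost_per_service.items():
            calls_all = sum(n for (c, s), n in self.usage.items() if s == service)
            calls_own = self.usage[(consumer, service)]
            if calls_all:
                total += cost * calls_own / calls_all
        return total


meter = ServiceMeter({"identity_storage": 100.0})
meter.record("app_a", "identity_storage")
meter.record("app_a", "identity_storage")
meter.record("app_b", "identity_storage")
print(meter.charge("app_a"))  # app_a made 2 of 3 calls, so it carries 2/3 of the cost
```

With metering in place at the service layer, predicting the cost of a new application reduces to estimating its expected call volumes against the already-priced services.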
That is, from my point of view, one of the biggest advantages amongst the pretty long list of business values an application security infrastructure can deliver. For sure that isn’t unique to application security infrastructures, but applies to any move towards service orientation.
12.10.2007 by Martin Kuppinger
Oracle today announced that they'd like to acquire BEA and have placed a bid for BEA. The BEA management, on the other hand, seems not to be willing to become a part of Oracle. To me, it's somewhat surprising that Oracle is looking at BEA. Oracle has its own middleware products and, from a technical perspective, I don't see an urgent requirement to buy BEA. BEA, for sure, is one of the leading vendors in this market space, but I don't expect them to add that much value to Oracle, at least from a technical perspective, that it would be worth paying the pretty high price.
So there is mainly one reason for this bid: market share. If you combine the market shares of both companies, the result will probably be the market leader, ahead of SAP and IBM as the closest competitors. With respect to the ongoing "battle" between Oracle and SAP, it's reasonable from an Oracle perspective to invest in market share, especially in the core segment of competition, the middleware or application infrastructure market – however you name this market segment.
Beyond this, the Oracle bid for BEA points to another thing I have had in mind for a long time: BEA is the only large vendor in this segment which is focused mainly on middleware. I doubt that this is sufficient. From my perspective, BEA and BMC would be an interesting fit – much more interesting than Oracle and BEA, because the latter is really mainly about market share. Combining BEA and BMC would lead to a strong vendor which competes against IBM and others, with a broad offering covering the infrastructure as well as the applications, and thus delivering the basis for a sort of ERP for IT. But if I look back at the PeopleSoft acquisition, I doubt that anyone other than Oracle will end up acquiring BEA.
So we can expect Oracle to set the next milestone in the mentioned "battle" against SAP by expanding its market share in the middleware market – and we can sit back and wait for a response from SAP.