29.04.2013 by Martin Kuppinger
The number one issue in the past weeks is the LivingSocial hack, where attackers reportedly stole massive amounts of personal data, including names, email addresses, birthdates, and encrypted passwords. LivingSocial has confirmed an attack, but not the reported figure of 50 million stolen records – which would be the vast majority of all LivingSocial users.
However, there is still relatively little information about the details. It is still unclear whether all non-Asian accounts are actually affected (LivingSocial holds the Asian accounts on another server). It is not publicly known how the passwords were encrypted, and thus it remains unclear to what extent the attackers might use them for subsequent attacks on other websites. Fortunately, it appears that the credit card information of LivingSocial users is held in separate databases and is not affected by the attack.
Given that this sort of attack against large sites happens regularly, the question becomes what lessons can be learned and what defenses should be put in place. The lesson for companies running such sites is clearly to invest in security, both for protection and for monitoring. However, successful attacks will happen and, in contrast to some earlier incidents at other sites, LivingSocial at least encrypted the passwords and used a separate database for credit card information.
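How the passwords were protected matters a great deal for the follow-up risk. The following is a minimal sketch – purely illustrative, and not a description of what LivingSocial actually did, which has not been disclosed – of the kind of salted, deliberately slow password hashing that makes a stolen password database far harder to exploit than fast, unsalted hashes.

```python
# Illustrative sketch of salted, slow password hashing (PBKDF2).
# This is NOT what LivingSocial used - their scheme has not been disclosed.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow; tune to the available hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user, stored next to the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```

With per-user salts, attackers cannot crack the whole database with one precomputed table, and a high iteration count slows down every single guess; a fast, unsalted hash offers neither protection.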
For the users, the answer is also straightforward: raise the bar for authentication. Reconsider using sites and services if they do not provide options for stronger authentication such as (good) 2FA approaches. Clearly, using a different hard-to-guess password for every site is an option, but that is fairly inconvenient – my colleague Craig Burton once stated that there is no such thing as a password muscle you can simply strengthen by training.
FIDO Alliance and Google
Another interesting bit of news is the uptake of the FIDO Alliance. Google is now also a member of this alliance, and there is some chance that the FIDO Alliance might gain sufficient momentum to become a success. I will cover this in a separate upcoming blog post.
Reported number of attacks
During the past few weeks, several companies such as Symantec, IBM (X-Force Report), and Akamai have published security reports discussing the number of attacks they observe. I found two aspects of these numbers genuinely interesting. One is that the numbers are highly inconsistent: some companies report massive increases in attacks, while others report decreases, at least for certain types of attacks.
The other interesting finding is one in the Symantec Internet Security Threat Report 2013. The report says that the number of targeted attacks increased by 42 percent. This number stands for a shift towards industrial espionage, with small businesses being affected in 31 percent of those attacks. Targeted attacks differ from large-scale phishing attacks in that the attackers are after specific data or want to cause concrete harm to specific targets, instead of just trying to phish as much data as possible from largely anonymous victims.
Data Broker Acxiom to sell data back to real owners?
You may not have heard of Acxiom, a company that describes itself as an “enterprise data, analytics and software as a service company” that is “known worldwide for our marketing database and consumer data”. There was a report that Acxiom plans to introduce a service that allows individuals to see the information Acxiom holds about them. In Germany, such services are mandated by law. For instance Schufa, a company that provides information about consumers’ creditworthiness, offers such a service. This is considered part of one’s fundamental rights, in this case the “right to informational self-determination”.
From a European perspective, making a business out of this is a somewhat strange thing. In fact, Acxiom is said to plan to charge people to learn about their own data. The fundamental question here obviously is whether “data about you” is “your data” per se or not.
19.04.2013 by Martin Kuppinger
A few days ago I read an article about “disruptive technologies” in the (glossy) customer magazine of a large system integrator. The article mentioned technologies such as Big Data, Cloud Computing, and Mobile Computing. But are these technologies really disruptive?
The definition of “disruptive innovation” in Wikipedia is as follows:
A disruptive innovation is an innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in the new market and later by lowering prices in the existing market.
Amongst the most prominent examples of disruptiveness are the replacement of sailing ships by steamboats in transportation and of horse-drawn carriages by automobiles. Sailing ships play virtually no role in transportation anymore, at least not in the western hemisphere and across long distances. The same is true for horse-drawn carriages.
Looking at the technologies mentioned at the beginning, Cloud Computing is the most disruptive one from my perspective. Cloud Computing has a massive impact on licensing models, even for on-premise IT technology. However, is “Cloud Computing” as a whole really disruptive? Or are only some parts of Cloud Computing, such as SaaS versus on-premise software, disruptive – and perhaps only some areas within those, such as office applications?
And where is the disruptiveness in Big Data? I do not see technologies being replaced by Big Data. Big Data allows for new types of solutions, but it is not disruptive at all.
When looking at Mobile Computing, one might argue that the recent drop in PC sales is a clear indicator of disruptiveness. And yes, there is some likelihood that the classical PC market will shrink further. However, “Mobile Computing” appears to be too unspecific to be the disruptive innovation. There are tablets, smartphones, phablets, etc. that are challenging the PC market. But do we really know what the future will look like? I just recently switched back from the Apple iPad to more productive devices, i.e. a tablet PC (ultrabook) plus a Microsoft Surface RT, which at least comes with Office apps and can display a Word document or PowerPoint presentation with correct formatting. And I have a classical PC under my desk, with three 27” displays attached – mail to the left, Word in the middle, browser and Skype to the right. The different trends within Mobile Computing are disruptive for traditional PC technology. But what will the result be? We just do not know yet.
Overall, not defining everything as disruptive might be helpful. Many things that appear to be disruptive during the hype turn out to be not that disruptive. The reality of most organizations will be hybrid environments for the foreseeable future – and not pure Cloud Computing. There will remain a significant need for on-premise IT, for desktop PCs, and for classical databases and BI (Business Intelligence). It is important to look at new opportunities, but doing it with some realistic distance helps – especially in IT.
17.04.2013 by Martin Kuppinger
This week, I read an article (in German) about a severe security bug in heating systems provided by Vaillant, one of the larger manufacturers in that space. The issue was found in so-called “nano block heating systems” – small combined heat and power units made for detached and duplex houses.
These units have an IP interface that allows both the vendor’s service technicians and the owner of the heating system to manage the device remotely. However, a security bug allows pretty much anyone to easily access, in clear text, the passwords of the owner, the technician (expert), and even the developer. In other words: attackers can easily gain full access and control all settings. That allows raising the temperature of the outgoing water in summer, which can damage the heating element. It allows switching off the heating in winter, which could result in frost damage. There are most likely other kinds of damage an attacker could cause.
Even worse, these systems communicate with the vendor’s DynDNS (Dynamic DNS) service. That allows attackers to identify all such systems in a simple way, just by trial and error.
Vaillant has announced that it will inform customers, update the software – which, despite the IP interface, requires a technician to visit each customer – and provide VPN communication for technicians.
This issue is a perfect example of what is happening these days in smart metering and other areas of “smart homes”. Vendors start adding IP interfaces, but they fail at security. In the entire segment of home automation, which is based on standards such as EIB/KNX, understanding of security issues appears to be rather limited. Security is understood as “availability”, not as being secured against attackers. That is, by the way, true for other standards as well – most bus systems in manufacturing are not secure at all. EIB/KNX does not even have a security layer. These bus systems typically rely on simple broadcasting: whoever has access to the bus has access to everything. Once you connect such a bus to the Internet, things obviously become highly insecure.
The obvious solution is to protect the IP interface. However, as long as that is not done perfectly well, the problem remains. The entire manufacturing industry, but also the automotive industry and others that rely on rather primitive bus systems, have to fundamentally rethink their security approaches. Not doing so is grossly negligent.
Smart infrastructures require smart security. Not having well-thought-out and well-implemented security approaches in place but relying on stone-age security approaches for (sometimes) stone-age bus systems puts us all at risk. There is a good reason for the massive potential of attacks like Stuxnet: it arises from opening up environments that are insecure by design to the Internet, without appropriately changing the security approaches.
02.04.2013 by Martin Kuppinger
During the past few days, there have been at least two notable events in security. One was the attack on South Korean banks and TV networks. The other was the “Spamhaus incident”. I will talk about these two in more detail further down in this post.
Besides that, it was interesting to observe that iOS and OS X increasingly seem to become the malware targets of choice. That is not surprising, however, since there are masses of iOS and OS X devices out there. Thus, the platform is far more attractive than in the past. Combined with the fact that Apple’s patch policy still is not convincing, this results in an increasing number of attacks. When I count the platform-related news items of the past two weeks in my CNET RSS feed, 5 out of 6 articles were related to the two Apple operating systems. That just confirms what I have been saying for a long time: it is not so much about whether a platform is secure or insecure; it is about reaching a critical mass that makes it a target of choice for attackers. They will always find weaknesses, because complex systems never will be perfect. By the way, it would only be fair if the past castigators of Microsoft Windows security now held Apple to the same standard. Microsoft has learned a lesson. Has Apple already learned its lesson? I doubt that.
One other interesting news article was about Java updates. According to a new Websense report, 94% of endpoints running Java are vulnerable to at least one exploit. This shows that Java updates currently do not work well. One of the issues clearly is that Java runs on a variety of devices. While updating PCs is straightforward, other devices – especially the ones where Java is deeply embedded – are hard to update, due to the lack of a simple, standardized approach for patching them. From my perspective, Oracle should concentrate on adding a sort of “patch support by design” capability to all future Java versions. While many people criticize the Microsoft Update concept, it is – from my perspective – by far the best approach currently in place across the entire industry.
South Korea vs. North Korea
Last week, some South Korean companies – TV broadcasters and banks – were hit by a massive cyber-attack run by a group that calls itself “Whois Team”. There were clear signs that the attack was part of the ongoing “cold war” between South Korea and North Korea, which is currently escalating again. Even though it is still unclear where the attack originated, I think this is another indicator of the emerging risk of cyber-attacks in conflicts between nations.
The “Spamhaus incident”
Finally, a cyber-fight between Spamhaus, a spam-fighting organization, and a group of attackers even made it to the TV news here in Germany and in other countries. This attack is reported to be the largest DDoS (Distributed Denial of Service) attack ever. It reportedly affected the whole Internet, especially in the U.K., Germany, and the Netherlands (Spamhaus is based in the Netherlands). There are two lessons we can learn from this. One is that the Internet, despite its distributed nature, is not immune to attacks. The second is that cyber-criminals are obviously well prepared to counter attacks against them, having large botnets on hand to mount such DDoS attacks.
Physical Attacks on Critical Infrastructure
What I also found interesting were some articles about the Egyptian police arresting three men who tried to cut through undersea Internet cables belonging to the Telecom Egypt network. A few days earlier, cables of the Seacom network, which connects various countries under the Mediterranean Sea, were damaged. From what was reported, the Egyptian police caught the divers in the act of trying to cut the Telecom Egypt cables. I have not read anything about the motivation of these attackers. However, this clearly is another indicator of the massive risk to critical infrastructures these days.
27.03.2013 by Martin Kuppinger
I read an interesting article in Geo magazine – sort of the German counterpart to National Geographic – about the future of vehicles and their connectivity. However, I did not find anything about security in it. This is not a new experience: most articles and discussions about connected vehicles and their integration into the smart grid (plus all the discussions about smart grids and smart infrastructures) are still security-agnostic.
Do we really want to drive unsecured connected vehicles? Do we really want to live in a smart but unsecured world? How smart will that world really be? I have blogged about this before. In these days of increasing cyber-attacks and of a growing understanding of the risks to critical infrastructures, security agnosticism is not acceptable anymore.
The article discussed concepts like using electric vehicles as storage for electric power – a sort of large, distributed battery for the power grid. This is a great idea; however, thinking about the connectivity required for that, just in the context of correct billing alone, shows that this is an interesting topic from both the security and the identity perspective.
At EIC 2012, we held a workshop on the topic of the connected vehicle. We had a very intense discussion there. We quickly identified a complex ecosystem of identities that need to share data. However, most data must be shared only between a few selected parties. There are the owner, the driver, the leasing company, the passengers, the garage, the insurance company, the vendor, and the manufacturer, to name just a few of the possible interested parties. Within the car there are components provided by many different manufacturers, which might talk to others – or not. There are other cars, there are traffic management systems, there is the police, etc. Not to mention the utility companies… It is an extremely complex ecosystem.
Within that ecosystem, sharing of data must be very tightly managed. Some data might pass to the police only, while other data must not go there. However, that might differ from country to country. Some data is only relevant to the driver or the vendor; other data should be also available for the manufacturer.
However, sharing of data is the smaller part of the challenge. The need for well-controlled security and identity becomes even larger when we are talking about controlling the car or the traffic in general. The idea of cyber-criminals taking control of vehicles is frightening.
I know that several car manufacturers are investing in PKI and related technologies to secure communication among various components. That might work for the components within a car, but it will not be sufficient for the bigger ecosystem of the connected vehicle I have outlined above. What we need are bigger, cross-industry concepts that integrate all the related parties and components. The good thing is that many of the answers to the challenges of the connected vehicle already exist. Life Management Platforms are one element; they allow managing a lot of related information in a privacy- and security-aware manner. The API Economy and API security are important for securing all the interfaces in these complex, connected systems. Identity Federation is an important piece of the puzzle as well. What I still miss, however, is both a clear view of the big picture and coordinated initiatives for a secure smart planet, including connected vehicles.
It is past time to act. At EIC 2013, we will have a roundtable for the automotive industry – a good place to connect with others. We will have various sessions around Life Management Platforms, the API Economy, and other security topics. So do not miss EIC 2013 if you are involved in securing the smart planet of the future and are looking for a more holistic approach instead of point solutions for various pieces.
25.03.2013 by Martin Kuppinger
Recently I had some conversations with both vendors and customers about licensing models for IAM (Identity and Access Management) software. Historically, most licensing models were (and still are) based on the number of users, typically “named” users (rather than “concurrent” users). License models based on the number of concurrent users are rather unusual for IAM.
Nowadays, I observe some shift towards models that are based on the number of connections or that are even processor-based. The number of connections is a metric that shows up in federation products, where a connection typically is defined as “a connection from the federation hub to a target system, either Identity Provider or Service Provider”. However, vendors might also focus on “concurrent connections” in the sense of users federating. In cloud-based offerings, I have also seen approaches that bill per connection, i.e. based on the actual use of a federation service.
I have also been involved in discussions between customers and vendors about dealing with externals (contractors, clients, vendors, etc.). When looking for an Identity Provisioning or Access Governance solution with a focus on employees, a licensing model based on named users is straightforward. It is predictable. However, once the number of external identities grows, the question of changing the metric arises. Should an external user, who typically has somewhat limited access, cost as much as a regular, internal user? I have seen different approaches, ranging from the full fee to a percentage of the regular user fee or even flat rates for external users.
Finally, there is the discussion about classical license-plus-maintenance models versus subscription-based models without an initial fee but with a constant annual fee to pay.
So what is the best model? Honestly, I do not know what the perfect model is. I even doubt that there is a perfect model for licensing. However, both vendors and customers should concentrate on the characteristics of a “good” licensing model, beyond the fact that the vendor wants to earn as much as possible and the customer wants to pay as little as possible. From the customer’s perspective, these are:
- Predictability of cost
- Flexibility to adapt the model as needs change
- Flexibility to change the vendor
The first is probably the most important one. Customers need to be able to calculate the cost in advance. That works well for flat-rate models, but it does not work for models where the user base can grow massively – think of the Identity Explosion – or which depend on the actual use of a service. Models based on a flat fee for external users, on an overall flat fee (which in most cases does not work well for vendors), or on other factors such as the number of connections to IdPs and SPs fulfill that requirement. Processor-based licensing also works quite well because it scales slowly and in a predictable manner.
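To make the predictability point concrete, here is a small sketch comparing how the projected license cost develops under a pure per-named-user model versus a model with a reduced fee for external users, as the external population grows. All prices and user counts are hypothetical and purely for illustration.

```python
# Hypothetical cost projection; all figures are invented for illustration only.
INTERNAL_USER_FEE = 25.0  # EUR per named internal user per year (assumed)
EXTERNAL_USER_FEE = 5.0   # EUR per named external user per year (assumed)

def per_named_user(internals: int, externals: int) -> float:
    """Every identity pays the full named-user fee."""
    return (internals + externals) * INTERNAL_USER_FEE

def reduced_external_fee(internals: int, externals: int) -> float:
    """Internals pay the full fee, externals a reduced one."""
    return internals * INTERNAL_USER_FEE + externals * EXTERNAL_USER_FEE

internals = 10_000
for externals in (1_000, 10_000, 100_000, 1_000_000):  # the "Identity Explosion"
    print(f"{externals:>9} externals: "
          f"per named user {per_named_user(internals, externals):>12,.0f} EUR | "
          f"reduced external fee {reduced_external_fee(internals, externals):>12,.0f} EUR")
```

The specific numbers do not matter; the point is that once external identities outnumber internal ones by an order of magnitude, the choice of metric dominates the total cost – which is exactly what makes pure per-user models so hard to predict.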
The flexibility to adapt the model as needs change – by both scaling up and scaling down – is another important factor. However, this again is about predictability. Adding new groups of users, new systems, etc. must lead to predictable costs. Doing that right can be rather attractive for customers: they can start small, with one or two partners, and then add further federation partners or systems later, at a fixed cost per added partner or system.
The flexibility to change the vendor is clearly not in the interest of the vendor, but it is in the interest of the customer. The initial license fee is an inhibitor for change. When you have to pay 500,000 € or US$ in advance just for licenses, it is much more difficult to build the business case for switching to another vendor than when relying on subscription-based models with a lower “entry fee”.
I recommend that both vendors and customers consider these criteria when looking at pricing models and rethinking existing business models. The most important question is: will success become too expensive? Or, in other words: will the Identity Explosion destroy my calculation? Overall, I see a shift away from purely user-based licensing in most disciplines of IAM. Dealing with more types of users requires different answers.
19.03.2013 by Martin Kuppinger
Having published our second KuppingerCole Leadership Compass (on Access Governance) some ten days ago – with many others in the pipeline – I want to look at a blog post Michael Rasmussen, a former Forrester analyst and now an independent GRC expert, published in October 2012.
I do not want to comment on the Gartner Magic Quadrant and MarketScope or the Forrester Wave, and I do not fully share Michael Rasmussen’s opinion of them. His major complaint is that documents like these tend to be too one-dimensional for the needs of the customer. From my perspective, there is value in all of these documents, if they are used the right way. Clearly, it is not only about picking the vendor in the leaders’ corner – that vendor might be the best in the overall, condensed analyst view, but it is not necessarily the best one for the problem a customer wants to solve. For identifying a long list of vendors, however, such views are quite helpful.
In our Leadership Compass documents, we take another approach. There are four categories of leaders:
- Product Leaders (Product features, maturity, etc.)
- Market Leaders (Number of customers, ecosystem, global reach, etc.)
- Innovation Leaders (Current – not past – innovativeness, support for upcoming requirements, etc.)
- Overall Leaders (Combined rating)
Beyond that, we have matrices that relate product and market leadership, product and innovation leadership, and market and innovation leadership. This allows, for example, identifying vendors that are highly innovative but still have some way to go to become both product and market leaders. For some requirements, these vendors might be the best pick. Others might opt for the ones that are current product and market leaders, even while some of them might not be highly innovative.
Michael Rasmussen illustrated this in his post by noting that some customers might need a GRC vendor that is strong in Risk Management, while others might look for one with a particular strength in Audit or Policy Management.
I fully agree. However, from my perspective the customer not only needs that information, he needs a view that relates a particular strength (or weakness) to the overall product rating. A customer might start with a focus on a particular challenge, like Risk Management for Enterprise GRC products. However, over time he will in most cases need a product offering that serves all other Enterprise GRC aspects as well, at least at an adequate level. We provide that information in the additional matrices we have added to the KuppingerCole Leadership Compass on Access Governance. We will add them to upcoming Leadership Compass documents as well.
The figure above gives one example. This view shows the strength of products for SAP-specific requirements on Access Governance – the depth provided for SAP environment – in relation to the overall product rating. While the Product Leaders are the ones on the right side, the best products for SAP-specific Access Governance are the ones more to the top. SAP GRC is the clear leader when it comes to SAP-specific features, but it is not the leader when it comes to overall Access Governance functionality for heterogeneous environments.
When looking at that matrix, a customer can opt for a solution that is fairly good in both areas. He might also opt for a combined solution where he picks a specific solution for the SAP environment and another one for “the rest of the world”.
These matrices add information and provide a multi-dimensional view of the market. Michael Rasmussen is right in his complaint that not all of the products in a market segment can be easily put into the same box. However, defining market segments and identifying players therein is important for customers when they start solving a challenge and looking for vendors.
One thing I want to add: documents such as our KuppingerCole Leadership Compass are just one of many aids customers should use in making decisions. Besides strategy, guidelines, processes, and organization, a vendor selection process needs several stages. Documents like the Leadership Compass assist in identifying long-list and even short-list vendors. However, they cannot replace further evaluation, with a request for information based on the customer’s specific challenges, or a PoC. That is why we provide both the KuppingerCole Leadership Compass and additional advisory services to support the customer in these subsequent stages.
19.03.2013 by Martin Kuppinger
When looking through the security-related news of the past two weeks, there is very little that is surprising. Again, the usual topics – discussions about whom to accuse of cyber-attacks and about newly found attack vectors – have led to a series of news articles. There also have been ongoing discussions around privacy. However, as I stated in my previous security blog post: most topics remain the same. Some weeks it is about routers; this time, reports about security weaknesses in connected HP printers and some routers (TP-Link) made the news.
However, there have been news articles on two topics that caught my attention.
Trend Micro on ICS/SCADA security
Trend Micro published the results of a test they ran to analyze the real security threats to ICS (Industrial Control Systems) and SCADA (Supervisory Control and Data Acquisition) networks. These environments have been under attack by Stuxnet, Duqu, and Flame over the past years.
Trend Micro chose a small town in California and set up a virtual pumping station with a control system for water pressure. They made the station visible on the Internet. All software components existed, but there were no physical water pumps. They created three different “honeypots” with typical weaknesses found in real-world environments.
Within roughly one month, Trend Micro detected 39 attacks from 14 different countries. The leading countries were China (35%), the USA (19%), and Laos (12%). At least twelve attacks appeared to be targeted. Thirteen attacks were repeated by one or more attackers on different days; these obviously were targeted and automated. Trend Micro is still investigating the other attacks.
Clearly, there is a well-established ecosystem for espionage and cyber terrorism out there. No single organization with industrial production environments and no single organization in the “critical infrastructure” area can claim that it is not an attack target. It is past time to act and to better protect all IT environments in organizations.
Obama vs. Merkel
I also found some news articles about Obama hosting a meeting on cyber-security with CEOs and about him putting cyber-threats amongst the top topics in his call with the Chinese president. This helps increase awareness in industry, in governmental organizations, and beyond.
When looking at Germany, the situation is quite different. There are infrequent statements and activities from some of the ministries. There are some activities by various governmental organizations. However, compared to Barack Obama, there clearly is a lack of public statements and attention from Angela Merkel. At the CeBIT 2013 fair she visited, for instance, the booth of a provider of secure smartphones – the “Merkel phone” that allows her secure, encrypted communication. I think that putting cyber-threats at the top of the agenda would have been far more important than focusing on that phone (and the technology provider behind it). Time to wake up, I’d say.
06.03.2013 by Martin Kuppinger
Yesterday I spent a day at the CeBIT fair, still the world’s largest IT fair. Besides the many interesting meetings I had scheduled in advance, I started thinking about the CeBIT “Leitthema” – their “claim of the year”. This year it was “Shareconomy”. I still do not know what this term is supposed to mean. There is a somewhat fuzzy description on the CeBIT homepage, but in contrast to topics like “Cloud” and “Managing Trust” in 2011 and 2012 respectively, Shareconomy – described as “sharing and using information, resources and experience based on new forms of collaboration” – is a very amorphous concept. CeBIT then tries to associate it with crowdsourcing, smart infrastructures and smart grids, data security, big data, etc.
In fact, I think that there is something behind this rather strange buzzword. Back in September 2012, KuppingerCole hosted an event about the 4Cs: Communication, Collaboration, Content, and Cloud, which was about enabling new ways of collaboration and communication in a secure way. That probably is what the Shareconomy is all about.
When I look at our advisory business, I see another red-hot topic. In German I’d call it “Umgang mit Dritten”, i.e. how to interact with third parties and the services they provide in a consistent, standardized way. That is about Cloud Security, Identity Federation, the API Economy and security therein, etc. Opening up the perimeter and supporting business processes that integrate business partners, customers, etc. is highly important. So maybe that is also part of the Shareconomy. For sure, you will be able to learn a lot about this at our upcoming EIC – the real stuff, not the marketing buzz and fuzz. To highlight just a few sessions:
However, the thing that confused me most at CeBIT – in the context of their Shareconomy claim – was the lack of free WiFi. Sharing without connectivity? Or at least sharing without free or affordable connectivity? Will that work? I doubt it. I used the UMTS cards in my notebook and iPad respectively, because otherwise I would have had to pay 30 € for a 4-hour WiFi pass. That is even more than in the old-school hotels that still charge for WiFi. Ridiculous.
06.03.2013 by Martin Kuppinger
One of the topics I’ve been evangelizing for years is Dynamic Authorization Management. Dynamic Authorization Management is about externalizing authorization decisions from applications. It is about using an “application security infrastructure” which performs the authorization decisions (and manages other aspects of security such as authentication, the administration of users, etc.). It is about relying on security services instead of implementing security in every application.
Dynamic Authorization Management is often associated with XACML (eXtensible Access Control Markup Language). XACML in fact is a standard for implementing Dynamic Authorization Management, but the concept is not limited to XACML. In fact, Web Access Management systems implement the concept of Dynamic Authorization Management in a coarse-grained way, and some of these systems, as well as some of the available Policy/Entitlement Server products, provide their own proprietary APIs.
Before discussing the best approach to implementing Dynamic Authorization Management, it is important to understand the basic principle and its benefits. In this concept, an application asks the authorization system for an authorization decision. It provides some information with this request, e.g. the user ID; depending on the implementation, other attributes might be delivered as well. The authorization system takes this information and collects additional information if required – it might ask an authentication system for more context, retrieve roles from a directory service, etc. It then uses that information and the business rules (authorization rules) held in a policy repository to decide on the authorization. Having done that, it returns the decision to the requesting system.
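To illustrate the flow, here is a minimal, purely conceptual sketch in Python. The names (PolicyDecisionPoint, Request, the example policy) are hypothetical and do not correspond to any particular product API or to the XACML wire format; they only show an application delegating the decision to an external decision point that gathers attributes and evaluates centrally managed rules.

```python
# Conceptual sketch of externalized authorization - not a product API or the XACML format.
from dataclasses import dataclass, field

@dataclass
class Request:
    user_id: str
    action: str
    resource: str
    context: dict = field(default_factory=dict)  # e.g. authentication strength, amount

class PolicyDecisionPoint:
    def __init__(self, directory, policies):
        self.directory = directory  # attribute source, e.g. roles from a directory service
        self.policies = policies    # rules loaded from a central policy repository

    def decide(self, request: Request) -> str:
        # Collect additional attributes the rules may need.
        attributes = {"roles": self.directory.get(request.user_id, []), **request.context}
        # Evaluate the rules; the first applicable rule wins in this simple sketch.
        for policy in self.policies:
            result = policy(request, attributes)
            if result is not None:
                return result
        return "Deny"  # deny by default

def read_customer_policy(request, attributes):
    """Example rule: only support staff may read customer records."""
    if request.action == "read" and request.resource == "customer_record":
        return "Permit" if "support" in attributes["roles"] else "Deny"
    return None  # rule not applicable to other requests

pdp = PolicyDecisionPoint(
    directory={"alice": ["support"], "bob": ["sales"]},
    policies=[read_customer_policy],
)

# The application holds no authorization logic of its own; it only asks the PDP:
print(pdp.decide(Request("alice", "read", "customer_record")))  # Permit
print(pdp.decide(Request("bob", "read", "customer_record")))    # Deny
```

In a real deployment the decision point would be a separate service (reached via XACML, a REST call, or a vendor API) and the rules would live in a managed policy repository, but the division of labor is the same: the application asks, the authorization system decides.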
The obvious advantage is that applications do not need to manage users, authentication, or authorization. They just ask a central (logically central, but potentially physically distributed and logically “partitioned”) system. There is no longer a need to manage authorization rules within the application, and thus no need to provision that information into the application.
As a consequence, there is also no ongoing need to revoke access. IAM (Identity and Access Management) is no longer about “ensuring that access is revoked correctly”, because there is nothing to revoke (from applications). There is also nothing to grant anymore within the applications.
Everything is managed centrally. Changes are made centrally and become effective immediately. While Identity Provisioning will decrease in relevance, Access Governance will remain important. Identity Provisioning will have to cover far fewer targets than today, once a few central instances are used as repositories and target systems no longer hold authorization information locally. Access Governance will have to move from reviewing static access controls in target systems to reviewing the dynamic business and authorization rules in the central authorization system – something that some early adopters in the Access Governance market already support.
A strength of this concept is that such systems can enforce not only standard authorization rules but also business rules. Many role management projects struggle when it comes to supporting “competencies” or “constraints”, e.g. limits for the approval of purchase orders. This is fairly simple to implement and enforce with Dynamic Authorization Management.
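Continuing the hypothetical sketch above, such a constraint is just one more rule evaluated by the decision point – no extra roles or per-application logic required. The limits below are, of course, invented.

```python
# Hypothetical constraint rule: purchase-order approval limits, enforced centrally.
APPROVAL_LIMITS = {"clerk": 1_000, "manager": 10_000, "director": 100_000}  # assumed values

def po_approval_policy(request, attributes):
    if request.action == "approve" and request.resource == "purchase_order":
        limit = max((APPROVAL_LIMITS.get(role, 0) for role in attributes["roles"]), default=0)
        return "Permit" if request.context.get("amount", 0) <= limit else "Deny"
    return None  # not applicable to other requests
```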
The concept in fact is not really new. In the mainframe world, it has been around at least since the mid-’70s – just look at tools like RACF, or at the proprietary “entitlement management systems” several large organizations built for themselves.
However, there is no such thing as a free lunch. The obvious challenge is performance – can such a system be fast enough for today’s business needs? The best answer is given by the users of these systems: Large banks and large eCommerce sites are relying on these approaches today.
The biggest challenge in reality is that applications have to change. That in turn means that the way applications are architected and developed has to change. The mindset of application architects and developers has to change, and these groups have to collaborate closely with the IT security and IT infrastructure people. However, done right, architecting and coding applications becomes easier, given that architects and developers no longer need to ‘bake in’ authorization, authentication, etc., but can simply rely on the external service. Obviously, providing lean and simple approaches to Dynamic Authorization Management is a key success factor for this type of technology.
Dynamic Authorization Management is not about rapid change; it is about moving towards a better model over time. To do that, you should start now. Every single application is a win on that journey. Security risks and management complexity will be reduced. And Dynamic Authorization Management will allow you to focus on the key issue: allowing people to do exactly what the business wants them to do (and not more) – instead of technically granting and revoking access per application.
As always, there will be several sessions around Dynamic Authorization Management, XACML, etc. at this year’s EIC in Munich, May 14th to 17th.