13.05.2013 by Martin Kuppinger
Since my colleague Craig Burton declared that SAML is dead, it seems to be in vogue among analysts to take on the role of the public medical officer and diagnose the death of standards, or even of IAM (Identity and Access Management) in general. Admittedly, the latter case was not about diagnosing a death but about proposing to kill IAM, but that does not change much. The newest in this series of corpses is XACML, according to another industry analyst. So we are surrounded by corpses now, or maybe by living zombies. But is that really true? My colleague Craig Burton titled his blog post – for a very good reason – “SAML is Dead! Long Live SAML!” That is fundamentally different from saying “XACML is dead”.
There are a lot of good responses from experts such as Ian Glazer, Gerry Gebel (OK, he might be a little biased, being the President of Axiomatics Americas), or Danny Thorpe.
I can hardly be suspected of being an enthusiastic XACML evangelist wearing blinders. Just ask some of the Axiomatics guys – we have had many controversial discussions over the years. However, for me it is clear that neither Dynamic Authorization Management in general nor XACML in particular is dead.
What puzzled me most in this blog post was this part of its opening sentence:
XACML … is largely dead or will be transformed into access control
OK, “access control”. XACML is access control. Access control covers everything around authentication and authorization. So what does this sentence mean? I just do not understand it, sorry. XACML is one part of the overall access control story.
From my perspective, the two most important concepts within access control are Dynamic Authorization Management and risk-/context-based access control (covering both authentication and authorization). The latter will only work with Dynamic Authorization Management in place: if we want to make authorization decisions based on context and risk, we need systems that externalize authorization and rely on rules that can take that context into account.
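To make this concrete, here is a minimal sketch – in Python, with entirely hypothetical names and rules – of what externalized, context-aware authorization looks like from the application’s point of view: the application only asks and enforces, while the decision logic lives outside of it.

```python
# Minimal sketch of externalized, context-aware authorization.
# All names (PolicyDecisionPoint, decide()) are hypothetical; a real
# deployment would call a commercial or open source PDP instead.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: str    # who is asking
    action: str     # what they want to do
    resource: str   # what they want to do it to
    context: dict   # risk signals: amount, device, location, auth strength

class PolicyDecisionPoint:
    """Stand-in for an external Dynamic Authorization Management system."""

    def decide(self, request: AccessRequest) -> str:
        # Example rule: high-value transfers require strong authentication
        # and a low risk score -- the kind of context-dependent rule that
        # static ACLs baked into the application cannot express.
        if request.action == "transfer" and request.context.get("amount", 0) > 10_000:
            if request.context.get("auth_strength") != "2fa":
                return "Deny"
            if request.context.get("risk_score", 1.0) > 0.5:
                return "Deny"
        return "Permit"

# The application, acting as Policy Enforcement Point, only asks and enforces:
pdp = PolicyDecisionPoint()
decision = pdp.decide(AccessRequest(
    subject="alice", action="transfer", resource="account:4711",
    context={"amount": 25_000, "auth_strength": "2fa", "risk_score": 0.2},
))
print(decision)  # Permit -- strong authentication plus low risk score
```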
The challenge with Dynamic Authorization Management, i.e. the technology implemented in a variety of products such as the Axiomatics Policy Server, the Oracle Entitlements Server, the IBM Security Policy Manager, Quest APS, and many others, is that it requires changes in both application code and the mindset of software developers and architects. That is a long journey. On the other hand, we see some increase in the acceptance and use of such technologies. Notably, Dynamic Authorization Management is not new. You will find such concepts dating back to the mid ’70s in mainframe environments, and IBM’s good old RACF can be considered an early example of that.
You can still argue that Dynamic Authorization Management is alive but that XACML, as the most important standard around it, is dead. There are good arguments against that, and I will not repeat what the experts mentioned above have said. You might discuss where to use XACML and where to rely on proprietary technology. However, do you really want to lock your entire application landscape into the proprietary Dynamic Authorization Management technology of a single vendor? That would be a nightmare. You need to isolate your applications from the Dynamic Authorization Management system in use, and a standard helps in doing that. Just think about being locked into proprietary interfaces for all of your applications using a specific Dynamic Authorization Management system for the next 30, 40 or more years.
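A standard is exactly that isolation layer at the protocol level. At the application level, the minimum is to hide whatever Dynamic Authorization Management product is in use behind an interface of your own. A sketch, with hypothetical names and a stubbed XACML adapter:

```python
# Sketch of isolating applications from the authorization product in use.
# Application code depends only on the abstract interface below, never on
# a vendor's proprietary API, so the product behind it can be swapped.

from abc import ABC, abstractmethod

class AuthorizationService(ABC):
    @abstractmethod
    def is_permitted(self, subject: str, action: str, resource: str) -> bool: ...

class XacmlAuthorizationService(AuthorizationService):
    """Adapter that would translate calls into XACML requests to a PDP."""

    def __init__(self, pdp_url: str):
        self.pdp_url = pdp_url

    def is_permitted(self, subject, action, resource):
        # Build a XACML request, send it to self.pdp_url, parse the response.
        # Stubbed here; with a standard protocol, the PDP behind the URL can
        # change vendors without touching any application code.
        return True

def delete_order(authz: AuthorizationService, user: str, order_id: str):
    if not authz.is_permitted(user, "delete", f"order:{order_id}"):
        raise PermissionError("not authorized")
    # ... actual deletion logic ...
```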
XACML is an even better choice for COTS applications: they can rely on a standard instead of every vendor building proprietary connectors. Most vendors will build such connectors for Microsoft SharePoint, because SharePoint is so important. But that is the exception, not the rule. And deducing from the fact that vendors support SharePoint with proprietary interfaces (instead of using XACML) that XACML is dead is simply a false deduction. The problem in that case is not XACML but the SharePoint security model, which clearly is not the best I have ever seen (to say the least). XACML is of value. Standards are of value. And I believe you would need much better reasons to diagnose the death of a standard.
To learn more about the real trends in IAM, IAG, Cloud Security, and many other topics, just visit the EIC 2013 that starts on Tuesday, May 14th.
01.05.2013 by Martin Kuppinger
Last week, Google announced that it has joined the FIDO Alliance. FIDO stands for Fast Identity Online. The alliance was formed in July 2012. Its mission is to change the nature of online authentication by providing interoperability among strong authentication devices. The alliance is working on specifications for an open, scalable, interoperable set of mechanisms that allow secure authentication of users to online services without the need for a password for each of these services. The aim is a standard that allows using both existing and future strong authentication devices – those that support the FIDO standard – in an interoperable way.
This is, in fact, “versatile authentication” built in from scratch, enabled on any device. Currently, many organizations are investing in versatile authentication technology that allows them to flexibly change and combine different authentication mechanisms. With FIDO, that could become a standard.
Users can choose among different mechanisms for strong authentication, including hardware tokens, embedded hardware such as TPMs (Trusted Platform Modules), biometrics, etc. The website will recognize the devices as “FIDO devices” and enable them. Once a strong authentication device is connected to a site, it can be used the same way it has always been used.
FIDO requires a browser plugin, which is the simple part of the story. It also requires a device-specific module that must be installed to use the “FIDO authenticator”, i.e. the strong authentication device of choice. The website or online service must also support FIDO.
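The underlying idea is a challenge-response over a key pair held by the FIDO authenticator, so the online service never stores a shared secret. The following is my simplified sketch of that principle – not the actual FIDO protocol – using the Python cryptography package:

```python
# Simplified sketch of the challenge-response idea behind FIDO: the
# service stores only a public key registered by the user's device,
# never a shared secret. This illustrates the principle, not the spec.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator generates a key pair; only the
# public key ever reaches the online service.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()

# Login, step 1: the service issues a fresh random challenge.
challenge = os.urandom(32)

# Login, step 2: the authenticator signs the challenge locally
# (in real FIDO, only after a user gesture such as a touch or a PIN).
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Login, step 3: the service verifies the signature. No password is
# ever transmitted or stored, so there is nothing to phish or leak.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")
```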
The success of FIDO will depend on two factors. First, there must be a critical mass of online services supporting FIDO. Given that several large service providers are already members of the FIDO Alliance, that might happen. Second, there must be a critical mass of users using strong authentication devices with FIDO support. The challenge in that area will be the simple enablement of FIDO through browser plugins (even better if they are pre-installed) and especially the availability and simple deployment of device-specific modules.
On the other hand there clearly is the question of whether FIDO will gain sufficient support and acceptance amongst the vendors. What will the vendors of strong authentication devices do? What will the vendors of versatile authentication platforms do? And what will the providers of online authentication services do?
From my perspective FIDO could help all of them. It provides the opportunity for “strong authentication for the masses”, for a ubiquitous approach that works for everyone, with flexible choice of strong authentication devices. The providers of Versatile Authentication Platforms can still provide the server-side interfaces, but with more flexibility in supporting different devices. And providers of online authentication services can still act as brokers and service providers – for many online services that will remain the better choice than direct support for FIDO. There might even be services that are brokers for “non-FIDO clients” and act as FIDO clients.
Overall, there is good potential for the FIDO Alliance, despite the fact that it requires the installation of a client component. I greatly appreciate everything that makes the Internet more secure, and I will closely watch the progress of the FIDO Alliance. However, I have seen so many concepts in that area that I would not bet on the success of any single one.
30.04.2013 by Martin Kuppinger
On Thursday, I was moderating a panel discussion at infosecurity Europe (InfoSec), the leading UK security fair, which hosts a program of keynotes and panel discussions. My panel was titled “Smarter security spending: Optimising spend without exposing the business”. Panelists were Dragan Pendić, Chief Security Architect, Global Information Management and Security, at Diageo; Michelle Tolmay, Security Officer, ASOS; Cal Judge, Information Security Head, Oxfam; and Graham McKay, CISO, DC Thomson.
We had a very interesting, well-attended session, with some good questions during the Q+A following the panel discussion. The key take-aways for smarter security spending that we came upon during the discussion were:
- Involvement of People
- Common Language
- Risk
- Big Picture
Getting the users on board was one of the most important themes of the discussion. Without increasing people’s involvement in and understanding of Information Security, it is hard to get the buy-in and support you need from both management and end users. This is an important element of what KuppingerCole calls Information Stewardship.
The involvement of people is tightly related to the need for a common language – talking in business terms instead of tech talk. Information Security is about the I in IT, not primarily the T: the business is interested in protecting information, not technology. The latter is just a means to protect information.
For that common language, the concept of “risk” is of central importance. Business thinks in risks. Managers are used to basing their decisions on risk; mitigating and taking risks is part of their daily job. Risk also helps in moving IT from the role of the notorious naysayer to that of a business enabler. If the business requests a service, instead of pointing at all the technical challenges and no-gos, it is better to show some options, their benefits, their costs, and the associated risks. That enables the business to make informed decisions.
Risk, on the other hand, is the foundation for smart spending when investing in Information Technology – the T in IT. Understanding the risk mitigation impact of a technology and its benefit for the business helps in making better decisions. It helps in moving away from point solutions and decisions made in “panic mode” after an incident, towards structured, well-thought-out decisions based on the best risk/reward ratio (RRR). This always includes understanding the big picture: how does a new solution fit in? Smart spending requires a smart balance between defining and understanding the big, strategic picture and taking tactical steps towards it that provide the best RRR.
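To illustrate the RRR idea with a toy calculation – all numbers are invented, standing in for an organization’s own risk assessment:

```python
# Toy illustration of comparing security investments by risk/reward.
# All figures are invented; real inputs come from the organization's
# own risk assessment (expected loss, likelihood, mitigation effect).

options = {
    # name: (annual_cost, expected_annual_loss_avoided)
    "do nothing":     (0,       0),
    "2FA for admins": (20_000,  150_000),
    "full DLP suite": (250_000, 180_000),
}

for name, (cost, loss_avoided) in options.items():
    net_benefit = loss_avoided - cost
    ratio = loss_avoided / cost if cost else 0.0
    print(f"{name:15s} net benefit: {net_benefit:>8,}  reward/cost: {ratio:4.1f}")

# The cheap, targeted control beats the expensive suite on both measures --
# exactly the kind of comparison that supports an informed business decision.
```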
To learn more about that, join us at EIC 2013 – the European Identity and Cloud Conference, Munich, May 14th-17th. Starting with my opening keynote, the topics discussed in that Infosec panel will play an important role throughout the entire conference.
29.04.2013 by Martin Kuppinger
The number one issue of the past weeks is the LivingSocial hack, in which attackers reportedly stole massive amounts of personal data, including names, email addresses, birthdates, and encrypted passwords. LivingSocial has confirmed an attack, but not the reported number of 50 million stolen records – which would be the vast majority of all LivingSocial users.
However, there is still relatively little information about the details. It is unclear whether all non-Asian accounts are actually affected. (LivingSocial holds the Asian accounts on a different server.) It is not publicly known how the passwords were encrypted, and thus it remains unclear to what extent the attackers might use them for subsequent attacks on other websites. Fortunately, it appears that the credit card information of LivingSocial users is held in separate databases and was not affected by the attack.
Given that this sort of attack against large sites happens regularly, the question becomes what lessons can be learned and which defenses should be put in place. The lesson for the companies running such sites clearly is to invest in security, for both protection and monitoring. Successful attacks will still happen, but, in contrast to some former incidents at other sites, LivingSocial at least encrypted the passwords and used a separate database for the credit card information.
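As an aside on what “encrypted the passwords” should ideally mean: salted, deliberately slow hashing rather than reversible encryption. A minimal sketch using only the Python standard library (the parameters are illustrative):

```python
# Minimal sketch of storing passwords as salted, slow hashes (PBKDF2)
# instead of reversibly encrypted values. Iteration count is illustrative;
# pick it according to current guidance and your hardware budget.

import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to hinder offline cracking

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user: defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```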
For the users, the answer is also straightforward: raise the bar for authentication. Reconsider using sites and services if they do not provide options for stronger authentication, such as (good) 2FA approaches. Using a different, hard-to-guess password everywhere is an option, but a fairly inconvenient one – my colleague Craig Burton once stated that there is no such thing as a password muscle you can strengthen by training.
FIDO Alliance and Google
Another interesting bit of news is the uptake of the FIDO Alliance. Google now is also a member of this alliance and there is some chance that the FIDO Alliance might gain sufficient momentum to become a success. I will cover this in a separate upcoming blog post.
Reported number of attacks
During the past few weeks, several companies such as Symantec, IBM (X-Force report), and Akamai have published security reports discussing the observed number of attacks. I found two aspects of these numbers genuinely interesting. One is that the numbers are highly inconsistent: some companies report massive increases in attacks, others report decreases, at least for certain types of attacks.
The other interesting finding is in the Symantec Internet Security Threat Report 2013. The report says that the number of targeted attacks increased by 42 percent. This number stands for a shift towards industrial espionage, with small businesses being affected in 31 percent of those attacks. Targeted attacks differ from large-scale phishing attacks in that the attackers are looking for specific data, or aim to cause concrete harm to specific targets, instead of just trying to phish as much data as possible from rather anonymous victims.
Data Broker Acxiom to sell data back to real owners?
You may not have heard of Acxiom, a company that describes itself as an “enterprise data, analytics and software as a service company” that is “known worldwide for our marketing database and consumer data”. There was a report that Acxiom plans to introduce a service that allows individuals to see the information Acxiom holds about them. In Germany, such services are mandated by law. For instance Schufa, a company that provides information about the creditworthiness of individuals, offers such a service. This is considered part of your fundamental rights, in that case the “right to informational self-determination”.
Making a business out of this looks somewhat strange from a European perspective. In fact, Acxiom is said to plan to charge people to learn about their own data. The fundamental difference here obviously is whether “data about you” is “your data” per se or not.
19.04.2013 by Martin Kuppinger
A few days ago I read an article about “disruptive technologies” in the (glossy) customer magazine of a large system integrator. The article mentioned technologies such as Big Data, Cloud Computing, or Mobile Computing. But are these technologies really disruptive?
The definition of “disruptive innovation” in Wikipedia is as follows:
A disruptive innovation is an innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in the new market and later by lowering prices in the existing market.
Amongst the most prominent examples of disruptiveness are the replacement of sailing ships by steamboats in transportation or of horse-drawn carriages by automobiles. Sailing ships play virtually no role at all in transportation anymore, at least not in the western hemisphere and across long distances. The same is true for horse-drawn carriages.
When looking at the technologies mentioned at the beginning, Cloud Computing is the most disruptive one from my perspective. Cloud Computing has a massive impact on licensing models, even for on-premise IT technology. However, is “Cloud Computing” as a whole really disruptive? Or are only some parts of it, such as SaaS versus on-premise software, disruptive – and maybe only some areas therein, such as office applications?
And where is the disruptiveness in Big Data? I do not see technologies being replaced by Big Data. Big Data allows for new types of solutions, but it is not disruptive at all.
When looking at Mobile Computing, one might argue that the recent drop in PC sales is a clear indicator of disruptiveness. And yes, there is some likelihood that the classical PC market will shrink further. However, Mobile Computing appears to be too unspecific to be the disruptive innovation. There are tablets, smartphones, phablets, etc. that are challenging the PC market. But do we really know what the future will look like? I recently switched back from the Apple iPad to more productive devices: a tablet PC (ultrabook) and, in addition, a Microsoft Surface RT, which at least comes with Office apps and can display a Word document or PowerPoint file with correct formatting. And I have a classical PC under my desk, with three 27” displays attached – mail to the left, Word in the middle, browser and Skype to the right. The different trends within Mobile Computing are disruptive for traditional PC technology. But what will the result be? We just do not know yet.
Overall, it might be helpful not to label everything as disruptive. Many things that appear disruptive during the hype turn out not to be that disruptive after all. The reality of most organizations will be hybrid environments for the foreseeable future – not pure Cloud Computing. There will remain a significant need for on-premise IT, for desktop PCs, and for classical databases and BI (Business Intelligence). It is important to look at new opportunities, but doing so with some realistic distance helps – especially in IT.
17.04.2013 by Martin Kuppinger
This week, I read an article (in German) about a severe security bug in heating systems from Vaillant, one of the larger manufacturers in that space. The issue was found in so-called “nano” combined heat and power (CHP) systems made for detached and duplex houses.
The units have an IP interface that allows both the vendor’s service technicians and the owner of the heating system to manage the device remotely. However, a security bug allows pretty much anyone to easily access, in clear text, the passwords of the owner, the technician (“expert”), and even the developer. In other words: attackers can easily gain full access and control all settings. That allows increasing the temperature of the outgoing water in summer, which can damage the heating element. It allows switching off the heating in winter, which could result in frost damage. There are most likely other types of damage an attacker could cause.
Even worse, these systems communicate with the DynDNS (Dynamic DNS) service of the vendor. That allows attackers to identify all systems in a simple way, just by “trial and error”.
Vaillant has announced that it will inform customers, update the software – which, despite the IP interface, requires a technician to visit each customer – and provide VPN communication for technicians.
This issue is a perfect example of what is happening these days in smart metering and other areas of the “smart home”. Vendors start adding IP interfaces, but they fail at security. In the entire segment of home automation, which is based on standards such as EIB/KNX, the understanding of security issues appears to be rather limited. Security is understood as “availability”, not as being secured against attackers. That is, by the way, true for other standards as well – most bus systems in manufacturing are not secure at all. EIB/KNX does not even have a security layer. These bus systems typically rely on simple broadcasting: whoever has access to the bus has access to everything. Once you connect the bus to the Internet, things obviously become highly insecure.
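For contrast, here is a minimal sketch of the kind of message authentication such bus protocols lack: a shared-key MAC over each frame plus a counter against replay. This is my simplification; real designs additionally need key distribution and management:

```python
# Sketch of the frame authentication that broadcast bus protocols such as
# EIB/KNX lack: each frame carries a MAC over a counter and the payload,
# so a device on (or bridged to) the bus cannot inject or replay commands
# without the key. Simplified; key management is out of scope here.

import hashlib
import hmac

KEY = b"shared-secret-provisioned-per-installation"  # illustrative only

def send_frame(payload: bytes, counter: int) -> bytes:
    msg = counter.to_bytes(8, "big") + payload
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()[:8]  # truncated MAC
    return msg + mac

def accept_frame(frame: bytes, last_counter: int):
    msg, mac = frame[:-8], frame[-8:]
    if not hmac.compare_digest(hmac.new(KEY, msg, hashlib.sha256).digest()[:8], mac):
        return None  # forged or corrupted frame
    if int.from_bytes(msg[:8], "big") <= last_counter:
        return None  # replayed frame
    return msg[8:]

frame = send_frame(b"heating:setpoint=21", counter=42)
print(accept_frame(frame, last_counter=41))  # b'heating:setpoint=21'
print(accept_frame(frame, last_counter=42))  # None (replay rejected)
```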
The obvious mitigation is protecting the IP interface. However, as long as that is not done perfectly well, the underlying problem remains. The entire manufacturing industry, but also the automotive industry and others relying on rather primitive bus systems, have to fundamentally rethink their security approaches. Not doing so is grossly negligent.
Smart infrastructures require smart security. Not having well-thought-out and well-implemented security approaches in place, but instead relying on stone-age security for (sometimes) stone-age bus systems, puts us all at risk. There is a good reason for the massive potential of Stuxnet: it arises from opening up insecure environments – insecure by design – to the Internet, without appropriately changing the security approach.
02.04.2013 by Martin Kuppinger
During the past few days, there have been at least two notable events in security. One was the attack on South Korean banks and TV networks. The other was the “Spamhaus incident”. I will talk about these two in more detail further down this post.
Besides that, it was interesting to observe that iOS and OS X increasingly seem to become the malware targets of choice. That is not surprising, however, since there are masses of iOS and OS X devices out there; the platform is far more attractive than in the past. Combined with the fact that Apple’s patch policy still is not convincing, this results in an increasing number of attacks. When I count the platform-related news of the past two weeks in my CNET RSS feed, 5 out of 6 articles were related to the two Apple operating systems. That just confirms what I have been saying for a long time: it is not so much about whether a platform is secure or insecure; it is about reaching the critical mass that makes a platform a target of choice for attackers. They will always find weaknesses, because complex systems will never be perfect. By the way: it would only be fair if the past castigators of Microsoft Windows security acted the same way now regarding Apple. Microsoft has learned its lesson. Has Apple learned its lesson yet? I doubt it.
Another interesting news item was about Java updates. According to a new Websense report, 94% of endpoints running Java are vulnerable to at least one exploit. This shows that Java updates do not work well today. One of the issues clearly is that Java runs on a variety of devices. While updating PCs is straightforward, other devices – especially the ones where Java is deeply embedded – are hard to update, due to the lack of a simple, standardized approach for patching them. From my perspective, Oracle should concentrate on adding “patch support by design” capabilities to all future Java versions. While many people criticize the Microsoft Update concept, it is – from my perspective – by far the best approach currently in place across the entire industry.
South Korea vs. North Korea
Last week, some South Korean companies – TV broadcasters and banks – were hit by a massive cyber-attack by a group that calls itself the “Whois Team”. There were clear signs that the attack was part of the ongoing “cold war” between South Korea and North Korea, which is currently escalating again. Even though it is still unclear where the attack originated, I think this is another indicator of the emerging risk of cyber-attacks in conflicts between nations.
The “Spamhaus incident”
Finally, a cyber-fight between Spamhaus, a spam-fighting organization, and a group of attackers even made it to the TV news here in Germany and in other countries. The attack is reported to be the largest DDoS (Distributed Denial of Service) attack ever. It reportedly affected the whole Internet, especially in the U.K., Germany, and the Netherlands (Spamhaus is based in the Netherlands). There are two lessons we can learn from this. One is that the Internet, despite its distributed nature, is not immune to attacks. The second is that cyber-criminals obviously are well prepared to counter attacks against them, having large botnets at hand to mount such DDoS attacks.
Physical Attacks on Critical Infrastructure
What I also found interesting were some articles about the Egyptian police arresting three men who tried to cut undersea Internet cables of the Telecom Egypt network. Some days ago, other cables of the Seacom network, which forms part of the Internet connectivity between various countries under the Mediterranean Sea, were destroyed. From what was reported, the Egyptian police caught the divers in the act of cutting the Telecom Egypt cables. I have not read anything about the motivation of these attackers. However, this clearly is another indicator of the massive risk to critical infrastructures these days.
27.03.2013 by Martin Kuppinger
I read an interesting article about the future of vehicles and their connectivity in Geo magazine, sort of the German counterpart to National Geographic. The article was quite interesting; however, I did not find anything about security in it. This is not a new experience: most of the articles and discussions about connected vehicles and their integration into the smart grid (plus all the discussions about smart grids and smart infrastructures) are still security-agnostic.
Do we really want to drive unsecured connected vehicles? Do we really want to live in a smart but unsecured world? How smart would that world really be? I have blogged about this before. In these days of increasing cyber-attacks and a growing understanding of the risks to critical infrastructures, security agnosticism is no longer acceptable.
The article discussed concepts like using electric vehicles as storage for electric power – a sort of large, distributed battery for the power grid. This is a great idea; however, thinking about the required connectivity, just in the context of correct billing alone, shows that this is an interesting topic from both the security and the identity perspective.
At EIC 2012, we held a workshop on the topic of the connected vehicle. We had a very intense discussion there. We quickly identified a complex ecosystem of identities that need to share data. However, most data must be shared only between a few selected parties. There are the owner, the driver, the leasing company, the passengers, the garage, the insurance company, the vendor, and the manufacturer, to name just a few of the possible interested parties. Within the car there are components provided by many different manufacturers which might talk to others – or not. There are other cars, there are traffic management systems, there is the police, etc. Not to mention the utilities companies here… It is an extremely complex ecosystem.
Within that ecosystem, the sharing of data must be very tightly managed. Some data might pass to the police only, while other data must not go there at all – and that might differ from country to country. Some data is only relevant to the driver or the vendor; other data should also be available to the manufacturer.
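A toy model of such party- and jurisdiction-dependent sharing rules – all categories and rules invented purely for illustration – might look like this:

```python
# Toy model of per-party, per-jurisdiction data-sharing rules in the
# connected-vehicle ecosystem. Categories and rules are invented for
# illustration; a real system would externalize them into managed policies.

SHARING_RULES = {
    # (data_category, party): jurisdictions where sharing is allowed
    ("location_history", "police"):       {"DE"},   # e.g. under a warrant regime
    ("location_history", "insurer"):      set(),    # never
    ("diagnostics",      "manufacturer"): {"DE", "US"},
    ("diagnostics",      "garage"):       {"DE", "US"},
    ("driving_style",    "insurer"):      {"US"},   # e.g. opt-in telematics tariffs
}

def may_share(category: str, party: str, jurisdiction: str) -> bool:
    return jurisdiction in SHARING_RULES.get((category, party), set())

print(may_share("diagnostics", "garage", "DE"))        # True
print(may_share("location_history", "insurer", "DE"))  # False
```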
However, sharing of data is the smaller part of the challenge. The need for well-controlled security and identity becomes even larger when we are talking about controlling the car or the traffic in general. The idea of cyber-criminals taking control of vehicles is frightening.
I know that several car manufacturers are investing in PKI and related technologies to secure communication among the various components. That might work for the components within a car, but it will not be sufficient for the bigger ecosystem of the connected vehicle I have outlined above. What we need are bigger, cross-industry concepts, integrating all the related parties and components. The good thing is that many of the answers to the challenges of the connected vehicle are there. Life Management Platforms are one element; they allow managing a lot of related information in a privacy-aware and security-aware manner. The API Economy and API security are important for managing the security of all the interfaces in these complex, connected systems. Identity Federation is an important piece of the puzzle as well. However, what is still missing is both a clear view of the big picture and coordinated initiatives for a secure smart planet, including connected vehicles.
It is past time to act. At EIC 2013, we will have a roundtable for the automotive industry – a good place to connect with others. We will have various sessions around Life Management Platforms, the API Economy, and other security topics. So do not miss EIC 2013 if you are involved in securing the smart planet of the future and are looking for a holistic approach instead of point solutions for the various pieces.
25.03.2013 by Martin Kuppinger
Recently I had some conversations with both vendors and customers about licensing models for IAM (Identity and Access Management) software. Historically, most licensing models were (and still are) based on the number of users, typically “named” users (rather than “concurrent” users). License models based on the number of concurrent users are rather unusual for IAM.
Nowadays, I observe a shift towards models that are based on the number of connections, or that are even processor-based. The number of connections is a metric that shows up in federation products, where a connection typically is defined as “a connection from the federation hub to a target system, either Identity Provider or Service Provider”. However, vendors might also count “concurrent connections” in the sense of users federating. In cloud-based offerings, I have also seen approaches that bill per connection, i.e. based on the actual use of a federation service.
I have also been involved in discussions between customers and vendors about dealing with externals (contractors, clients, vendors, etc.). When looking for an Identity Provisioning or Access Governance solution with a focus on employees, a licensing model based on named users is straightforward and predictable. However, once the number of external identities grows, the question of changing the metric arises. Should an external user, who typically has somewhat limited access, cost as much as a regular internal user? I have seen different approaches, ranging from the full fee to a percentage of the regular user fee, or even flat rates for external users.
Finally, there is the discussion about classical license-plus-maintenance models versus subscription-based models, which have no initial fee but a constant annual fee.
So what is the best model? Honestly, I do not know what the perfect model is. I even doubt that there is a perfect licensing model. However, both vendors and customers should concentrate on the characteristics of a “good” licensing model, beyond the fact that the vendor wants to earn as much as it can and the customer wants to pay as little as possible. From the customer perspective, these are:
- Predictability of cost
- Flexibility to adapt the model as needs change
- Flexibility to change the vendor
The first one is probably the most important: customers need to be able to calculate the cost in advance. That works well for flat-rate models, but it does not work for models where the user base can grow massively – think about the Identity Explosion – or which depend on the actual use of a service. Models based on a flat fee for external users, an overall flat fee (which does not work well for vendors in most cases), or other factors like the number of connections to IdPs and SPs fulfill that requirement. Processor-based licensing also works quite well, because it scales slowly and in a predictable manner.
The flexibility to adapt the model as needs change – by both scaling up and scaling down – is another important factor. However, this again is about predictability: the cost of adding new groups of users, new systems, etc. must be predictable. Doing that right can be rather attractive for customers, when they can start small with one or two partners and then subsequently add other federation partners or systems at a fixed cost per added partner or system.
The flexibility to change the vendor clearly is not in the vendor’s interest, but it is in the customer’s. The initial license fee is an inhibitor of change. When you have to pay 500,000 € or US$ up front just for licenses, it is much more difficult to build the business case for switching to another vendor than when relying on subscription-based models with a lower “entry fee”.
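A back-of-the-envelope comparison, with invented numbers, shows the effect:

```python
# Back-of-the-envelope comparison of cumulative cost under a classical
# license-plus-maintenance model versus a subscription model. All numbers
# are invented for illustration only.

def perpetual(years: int, license_fee=500_000, maintenance_rate=0.20) -> int:
    # Up-front license plus a yearly maintenance fee as a share of it.
    return license_fee + int(license_fee * maintenance_rate) * years

def subscription(years: int, annual_fee=150_000) -> int:
    return annual_fee * years

for years in (1, 3, 5, 7):
    print(f"year {years}: perpetual {perpetual(years):>9,}  "
          f"subscription {subscription(years):>9,}")

# Switching vendors after year 3 writes off the 500,000 up-front license;
# under the subscription model, only the remaining contract term is at stake.
```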
I recommend that both vendors and customers consider these criteria when looking at pricing models and rethinking existing business models. The most important question is: will success become too expensive? Or, in other words: will the Identity Explosion destroy my calculation? Overall, I see a shift away from purely user-based licensing in most disciplines of IAM. Dealing with more types of users requires different answers.
19.03.2013 by Martin Kuppinger
Having published our second KuppingerCole Leadership Compass (on Access Governance) some ten days ago – with many others in the pipeline – I want to look at a blog post Michael Rasmussen, a former Forrester analyst and now an independent GRC expert, published in October 2012.
I do not want to comment on the Gartner Magic Quadrant and MarketScope or the Forrester Wave here, and I do not fully share Michael Rasmussen’s opinion of them. His major complaint is that documents like these tend to be too one-dimensional for the needs of the customer. From my perspective, there is value in all of these documents, if they are used the right way. Clearly, it is not only about picking the vendor in the upper right – that vendor might be the best in the overall, condensed analyst view, but is not necessarily the best one for the problem a particular customer wants to solve. However, for identifying a long list of vendors, such views are quite helpful.
In our Leadership Compass documents, we take another approach. There are four categories of leaders:
- Product Leaders (Product features, maturity, etc.)
- Market Leaders (Number of customers, ecosystem, global reach, etc.)
- Innovation Leaders (Current – not past – innovativeness, support for upcoming requirements, etc.)
- Overall Leaders (Combined rating)
Beyond that, we have matrices that relate product and market leadership, product and innovation leadership, and market and innovation leadership. This allows, for example, identifying vendors that are highly innovative but still have some way to go to become both product and market leaders. For some requirements, these vendors might be the best pick. Others might opt for the ones that are current product and market leaders, even while some of them might not be highly innovative.
Michael Rasmussen illustrated this in his post by noting that some customers might need a GRC vendor that is strong in Risk Management, while others might look for one with a particular strength in Audit or Policy Management.
I fully agree. However, from my perspective the customer not only needs that information; he needs a view that relates a particular strength (or weakness) to the overall product rating. A customer might start with a focus on a particular challenge, like Risk Management for Enterprise GRC products. Over time, however, he will in most cases need a product that serves all the other Enterprise GRC aspects as well, at least at an adequate level. We provide that information in the additional matrices we have added to the KuppingerCole Leadership Compass on Access Governance, and we will add them to upcoming Leadership Compass documents as well.
The figure above gives one example. This view shows the strength of products regarding SAP-specific requirements on Access Governance – the depth provided for SAP environments – in relation to the overall product rating. While the Product Leaders are the ones towards the right, the best products for SAP-specific Access Governance are the ones towards the top. SAP GRC is the clear leader when it comes to SAP-specific features, but it is not the leader when it comes to overall Access Governance functionality for heterogeneous environments.
When looking at that matrix, a customer can opt for a solution that is fairly good in both areas. He might also opt for a combined solution where he picks a specific solution for the SAP environment and another one for “the rest of the world”.
These matrices add information and provide a multi-dimensional view of the market. Michael Rasmussen is right in his complaint that not all of the products in a market segment can be easily put into the same box. However, defining market segments and identifying players therein is important for customers when they start solving a challenge and looking for vendors.
One thing I want to add: documents such as our KuppingerCole Leadership Compass are just one of many aids customers should use in making decisions. Besides strategy, guidelines, processes, and organization, a vendor selection process needs several stages. Documents like the Leadership Compass assist in identifying long-list and even short-list vendors. However, they cannot replace further evaluation, with a request for information based on the customer’s specific challenges, or a PoC. That is why we provide both the KuppingerCole Leadership Compass documents and additional advisory services to support customers in these subsequent stages.