10.12.2012 by Martin Kuppinger
Recently, there was news here and here that a disgruntled technician of the Swiss spy agency NDB (Nachrichtendienst des Bundes) had stolen terabytes of counter-terrorism information shared between the NDB, the CIA, and MI6 (the UK spy agency). The technician was temporarily arrested. It is still unclear whether he has already sold some of that information.
This case, together with many others like the theft of data from Swiss banks, which was then sold to German tax offices, again highlights that the biggest security risk for most organizations comes from insiders. There is no doubt that the number of external attacks is increasing. There is no doubt about the massive risk for critical infrastructures. And there is no doubt that manufacturing and, in general, SCADA devices are at far higher risk than before.
However, there are two important aspects to consider:
- Many insiders have privileged access, frequently with a lack of control. They can potentially steal large amounts of data and cause massive harm.
- Many of the external attacks are in fact hybrid attacks, involving insiders.
For organizations, this means that they should not focus only on external attacks. The concept of perimeter security is an illusion anyway; there is no such thing as “the perimeter around the organization” anymore. What organizations have to do is move forward to protect information, regardless of where it is accessed, where it resides, which device is used, and whether it is accessed by insiders or outsiders. Point solutions which claim to solve this issue won’t help without the bigger picture in mind. They just increase the risk of bad investments.
However, there are some things you have to do: Access Governance and Intelligence is one of them. Privilege Management is another. But Privilege Management should be well integrated with Identity Provisioning and Access Governance/Intelligence instead of being a point solution. The most important thing to do now is to understand the big picture of information security. That’s what you should put at the top of your agenda for 2013.
To learn how to best establish Information Stewardship as a principle, you should have a look at our new report “From Data Leakage Prevention (DLP) to Information Stewardship”, #70587, written by my colleagues Mike Small and Dave Kearns.
27.10.2011 by Martin Kuppinger
In a recent briefing with CrossIdeas, the management buy-out (MBO) of the former Engiweb, an Italian software manufacturer in the area of Access Governance and Dynamic Authorization Management, they demonstrated an interesting feature: doing recertifications based on relevance. Recertification of access rights is a key element of regulatory compliance. It is frequently done on a pretty standardized schedule; once or twice a year is the typical approach. For some specific systems or groups of users, we frequently see shorter intervals, i.e. some risk-oriented approach is not uncommon. However, cynics might say that the main purpose still is to make the auditors happy.
CrossIdeas has now implemented an approach they name “relevance”. Based on several criteria, like the number of SoD violations, the system identifies the most relevant users for recertification. Currently it supports six different parameters. The weight of these parameters can easily be changed using sliders. The least relevant users can then be removed – again using a slider – from the result set (a relevance map), leaving only the relevant ones. Recertification can then focus specifically on them.
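The mechanism described above boils down to a weighted scoring function over per-user risk indicators, plus a cutoff. The following Python sketch is purely illustrative – CrossIdeas’ actual six parameters and scoring logic are not public, so the parameter names, weights, and cutoff here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserRiskProfile:
    """Hypothetical per-user risk indicators feeding the relevance score."""
    user_id: str
    sod_violations: int
    privileged_entitlements: int
    dormant_accounts: int

def relevance_score(profile: UserRiskProfile, weights: dict) -> float:
    """Weighted sum of risk indicators; the weights model the sliders."""
    return (weights["sod"] * profile.sod_violations
            + weights["priv"] * profile.privileged_entitlements
            + weights["dormant"] * profile.dormant_accounts)

def relevance_map(profiles, weights, cutoff):
    """Score all users, drop the least relevant ones (the cutoff slider),
    and return the remainder sorted by descending relevance."""
    scored = [(p.user_id, relevance_score(p, weights)) for p in profiles]
    return sorted((s for s in scored if s[1] >= cutoff),
                  key=lambda s: s[1], reverse=True)

profiles = [UserRiskProfile("alice", sod_violations=3,
                            privileged_entitlements=5, dormant_accounts=0),
            UserRiskProfile("bob", sod_violations=0,
                            privileged_entitlements=1, dormant_accounts=0)]
weights = {"sod": 2.0, "priv": 1.0, "dormant": 0.5}
print(relevance_map(profiles, weights, cutoff=5.0))  # only "alice" remains
```

Changing a weight and re-scoring is cheap, which is what makes the interactive slider approach feasible: the campaign scope can be tuned before any recertification task is created.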
This feature isn’t a full replacement for standard, regular recertification campaigns (which IDEAS – the name of the CrossIdeas product – supports as well). Relevance is, from my perspective, a nice concept which brings value to customers, because they can easily implement focused recertification campaigns for the most relevant users in addition to standard recertification. That not only makes the auditor happy but helps to better mitigate access risks. Not that standard recertification doesn’t help – but there is room for improvement, and CrossIdeas has demonstrated an approach to do that which will be available in the new release due later this year.
23.09.2011 by Martin Kuppinger
Today Microsoft announced that it has acquired technology assets from BHOLD, a Dutch vendor of Access Governance technology. Microsoft now owns technology which had been missing from its IAM portfolio, and thus enters the Access Governance market. Whether that will happen through enhancements of the existing FIM 2010 product or by adding another product based on the BHOLD technology hasn’t been announced yet. Either way, the deal will change the Access Governance market, particularly regarding the offerings which are targeted to complement Microsoft FIM.
KuppingerCole will follow up on this news and provide further information as soon as it is available. Overall, this acquisition proves that Microsoft continues investing in the broader IAM space and thus rates this market segment as important to its customers. For existing BHOLD customers, the acquisition provides new opportunities, given that they are now working with a much bigger vendor. However, the impact on existing customers can only be assessed once the Microsoft roadmap is unveiled. In general, we recommend that existing BHOLD customers stay calm until more information is available. For customers investing or planning to invest in FIM 2010, the acquisition is definitely good news, because it means that FIM will grow beyond its somewhat technical approach into a more business-oriented solution over time. However, without the roadmap being unveiled, it is hard to predict when Microsoft customers will really benefit.
15.09.2011 by Martin Kuppinger
Today, the next story about banks failing to manage trading risks hit the news. It remains unclear what allowed the trader to execute the unauthorized (and thus most likely illegal) transactions which led to that loss. However, the Risk Management of UBS obviously failed. By the way: UBS had to announce this on the very day the Swiss parliament started a debate about new regulations for the finance industry.
It will be interesting to hear why this could happen. Did some people cooperate? Did the risk management system fail specifically for these types of transactions? Or was it an Access Management problem like at SocGen some time ago, where the trader was able to control his own transactions? Whatever the reason, the incident proves that there is still a long way to go in Risk Management and overall GRC – not only in the finance industry.
GRC is a key task for C-level management. It needs sufficient funding. It needs support for the organizational changes required to build an organization with a high degree of process maturity and an understanding of the GRC requirements. It needs a strategic approach to integrate Business and IT to optimally support GRC, given that most business relies on IT systems and fraud in these systems causes the most severe harm. And it needs an organizational and an IT-architectural approach to be able to manage different regulations and all types of risks in a structured and efficient way.
For those thinking about how to move forward in GRC, today’s KuppingerCole webinar might be worth attending. It won’t answer all questions, but it will provide some valuable hints for moving forward in GRC. For sure, this is a long journey. But I strongly believe that it is feasible to avoid incidents like the one which has now happened at UBS – and to mitigate the overall risks for organizations through a strategic GRC initiative (instead of point solutions).
17.08.2011 by Martin Kuppinger
During the last few years, there has been a lot of change in the Identity Provisioning market. Sun became part of Oracle, Novell is now NetIQ, BMC Control-SA is now at SailPoint, Völcker has been acquired by Quest, and Siemens DirX ended up at Atos. These changes, as well as other influencing factors like mergers & acquisitions, failed projects, and so on, lead to situations where customers start thinking about what to do next in IAM and around provisioning. Another factor is that provisioning solutions are sometimes implemented with a focus on specific environments – SAP NetWeaver Identity Management for the SAP ecosystem, Microsoft FIM for the Active Directory world. Not that these tools support only those environments, but they might end up as just another provisioning system. In addition, especially in large organizations, it is not uncommon for regional organizations to start their own IAM projects. The result: there are many situations in which organizations think about what to do next in provisioning.
However, just moving from product A to product B is not the best approach. In most cases, the deployment of provisioning tools took quite a while. In many cases, a lot of customizations have been made. And even while there might be some uncertainty about the future of one product or the other (or, in some cases, the certainty that the product will be discontinued sometime in the future), just migrating from one provisioning tool to another seems to be quite expensive for little added value.
From my perspective, it is important for organizations to move at their own pace. The approach for doing that is to put a layer on top of the provisioning systems. I described several options in a research note (and some webinars) quite a while ago. The research note, called “Access Governance Architectures”, describes different approaches for layered architectures on top of provisioning products. I’ll write an update later this year, but the current version illustrates the basic principle well. By adding a layer on top of provisioning – which might be Access Governance, a Portal/BPM layer, or IT Service Management (or a mix) – organizations can deal with more than one provisioning tool. The architecture is more complex than just using one provisioning tool. But if you are not able to rely on one provisioning tool only, it’s at least an approach that works.
Organizations can then, for example, replace provisioning tools fully or partially. The latter is quite common if complex customizations have been made for selected target systems. Organizations can deal with multiple provisioning systems that “just appeared” for some reason – M&A, specific solutions for a specific part of the IT ecosystem, or whatever. And they can move forward more flexibly than in a monolithic architecture. Yes, these approaches require some more architectural work at the beginning, but that pays off. It pays off through more flexible migrations, through avoiding some migrations altogether, and through fewer “political” conflicts with some of the lobbies within IT. It even makes it possible to change the integration layer without affecting the underlying provisioning systems. And, for sure, it allows interfacing with target systems in a flexible way, using not only provisioning tools but service desks or other types of connectivity if required.
But, in the end, the most important thing is that it allows customers to move forward at their own pace. Thus, before you think about migrating away from your current provisioning tool, think about how you can protect your investments and add value – through new functionality, through the business-centric interfaces of Access Governance, and through the increased flexibility of your IAM environment.
10.08.2011 by Martin Kuppinger
Is there a mismatch between the reality in organizations and the implementations of at least several of the Identity Provisioning and Access Governance solutions when it comes to representing physical persons in IT? To me, it appears that there is.
In all large organizations I know, the real world is sort of 3-tiered:
- There is a physical person – let’s call him Mr. X
- Mr. X can act in very different contexts. You might call them roles or digital identities; however, all of these terms are overloaded with meanings. I’ll give three examples: 1. Mr. X might be an employee of an insurance company and a freelance insurance broker for the same company at the same time. 2. Mr. X might be an employee of a bank and a customer of that bank. 3. Mr. X might be the managing director of both company ABC, Inc. and DEF, Inc., which are both owned by XYZ, Ltd., where he is employed as well.
- In each of these contexts, Mr. X might have more than one account. If he acts as an external freelance insurance broker or a customer, that might be only one account. If he is the managing director of several corporations within a group, he might have different Active Directory accounts, different RACF accounts, different SAP accounts, and so on.
You might argue that these are exceptions. However, being a customer of the employing company isn’t an exception in many organizations. And, by the way: a good and valid model has to support not only the standard approach but the exceptions as well. In other words: there are few situations in which a real-world model isn’t 3-tiered.
And there are good reasons to model the systems accordingly. If someone is both a customer and an employee of a bank, there are very obvious SoD rules which apply: he shouldn’t grant loans to himself. If someone is a freelance insurance broker and an employee of the insurance company, the same is true: he shouldn’t manage the insurance contracts he is selling. If someone is a customer and an employee, it’s the same again: he shouldn’t give discounts, grant consumer loans, or just change the delivery queue.
However, several tools follow a 2-tiered approach. They know, for example, an “identity” and “accounts” or “users” which are associated with that identity. If someone has more than one such identity, the problems begin. In some cases, it is very easy to adapt the object model. In others, you have to find workarounds, like mapping the unique IDs of these identities into the other identities, which might require a lot of additional code and is error-prone.
From my perspective, supporting a 3-tiered model out-of-the-box, with
- Context, Identities,… (whatever term you prefer)
- Users (in specific systems), accounts,… (again – choose your term)
is mandatory to reflect the reality in organizations and to support the business requirements – especially when it comes to SoD policies. If you don’t need three tiers, it is easy to just use two of them. But if your tool supports only two tiers out-of-the-box, it might become a tough task to implement your real-world model. Looking at that point is, from my perspective, one of the most critical aspects when it comes to technology decisions.
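The 3-tiered model can be sketched in a few lines of Python. The class, context, and rule names below are illustrative, not taken from any particular product; the point is that SoD checks across contexts only work if accounts can be traced, via identities, back to one physical person.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Tier 3: an account in a specific system (AD, RACF, SAP, ...)."""
    system: str
    account_id: str

@dataclass
class Identity:
    """Tier 2: a context such as 'employee' or 'customer'."""
    context: str
    accounts: list = field(default_factory=list)

@dataclass
class Person:
    """Tier 1: the physical person, Mr. X."""
    name: str
    identities: list = field(default_factory=list)

# Hypothetical SoD rules: context pairs that must not go unchecked
CONFLICTING_CONTEXTS = {frozenset({"employee", "customer"}),
                        frozenset({"employee", "freelance broker"})}

def sod_conflicts(person: Person):
    """Return the context pairs of one person that match an SoD rule.
    With only a 2-tiered model, the two identities would be unrelated
    objects and this check could not be expressed directly."""
    contexts = {i.context for i in person.identities}
    return [pair for pair in CONFLICTING_CONTEXTS if pair <= contexts]

mr_x = Person("Mr. X", [
    Identity("employee", [Account("AD", "x-emp"), Account("SAP", "x-sap")]),
    Identity("customer", [Account("CRM", "x-cust")]),
])
print(sod_conflicts(mr_x))  # the employee/customer conflict is detected
```

Dropping one tier is trivial (a person with a single identity), which is why a 3-tiered model subsumes the 2-tiered case but not vice versa.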
28.07.2011 by Martin Kuppinger
Access Governance tools are becoming standard in IAM infrastructures. However, they mainly focus on “static” access controls, i.e. the entitlements granted to a user based on roles and other paradigms. Recertification is supported by these tools, and the solutions are maturing quickly. Thus, that part of Access Governance is easy to solve.
However, the next wave is coming with the increasing success of tools which are commonly called Entitlement Servers or Policy Servers. I tend to call them Dynamic Authorization Systems, because they authorize based on rule sets and attributes at runtime. While the rules are set in advance, the attributes keep changing. I’m a strong believer in these tools and in XACML as the underlying standard for communication between the different modules and in heterogeneous environments.
But what about Access Governance for these environments? Some of the Access Governance tools support that to some degree, allowing pre-evaluation of some business rules which use defined roles or attributes. However, many rules – especially business rules like “users of the life insurance back office with the role xxx and the defined constraint for signing payments up to 50,000 € are allowed to sign that type of claim” – are out of scope. There is some support for testing such rules, for example provided by Axiomatics.
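To make the distinction concrete, here is a minimal Python sketch of how a Dynamic Authorization System evaluates such a business rule at runtime: the decision depends on current attribute values, not on pre-provisioned entitlements. The attribute names are invented for illustration; a real deployment would express this as an XACML policy evaluated by a PDP, with the application acting as PEP.

```python
def permit_sign_claim(subject: dict, resource: dict) -> bool:
    """Runtime rule, mirroring the example above: back-office users with
    the right role may sign claims up to their personal signing limit.
    All attribute names here are hypothetical."""
    return (subject.get("department") == "life-insurance-backoffice"
            and subject.get("role") == "claims-approver"
            and resource.get("amount", 0) <= subject.get("signing_limit", 0))

# At runtime, the enforcement point gathers current attributes and asks
# for a decision per request:
user = {"department": "life-insurance-backoffice",
        "role": "claims-approver",
        "signing_limit": 50_000}
print(permit_sign_claim(user, {"amount": 40_000}))  # permitted
print(permit_sign_claim(user, {"amount": 60_000}))  # denied
```

The governance problem is visible even in this toy: recertifying such a rule means reviewing the rule text, the attribute sources, and the limit values together – there is no static entitlement list to attest.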
However, I don’t see a solution which provides integrated Access Governance for all types of entitlements. Given that Dynamic Authorization Systems are gaining momentum, it’s just a matter of time until auditors will ask for such solutions. These solutions should, like modern Access Governance tools, support lifecycle management for the policies, including approvals, auditing and analysis, and the recertification of such rules. That is more complex than what is done today. But, without any doubt, we will need this soon.
It will be interesting to observe who becomes the leader in that market. The vendors in the market of Dynamic Authorization Systems themselves? The Access Governance vendors? New startups?
By the way: The topic isn’t that new – look here.
21.07.2011 by Martin Kuppinger
Data Sprawl appears to me to be one of the biggest challenges in information security. And, by the way, Data Sprawl is not an issue that is specific to Cloud Computing. It is a problem organizations are facing day by day.
What happens when data is extracted from a SAP system? One example: a CSV (flat) file is created with some data from the HR system. This file is delivered to another system, in the best case using some secure file transfer. But what happens then? That other system processes the file in some way or another. It might export some or all of the data, which then ends up in yet another system. And so on…
The point is: once data leaves a system, it is out of control.
The problem is that this might happen not just with one CSV file but with hundreds of them – and with dozens of systems exporting and importing that data. Governance is difficult to implement. You can define a process for allowing exports. You might even define rules for the use of exported data. You might review the exports regularly – are they still needed? However, reviewing what happens with the data at the target systems (are the rules enforced?) is pretty complex. And there is, up to now, no technical solution to that problem.
Things become even worse with Data Warehouses and Business Analytics. Data frequently ends up in large data stores and is analyzed. That means that data is combined, sometimes exported again, and so on. How do you keep control? Implementing Access and Data Governance for Business Analytics systems is a big challenge, and auditors frequently identify severe risks in that area – which is no surprise at all.
Another scenario is PII on the Internet. If we give some PII to a provider for some reason, how could we ensure that they don’t give that PII away? No way, I’d say. We might use special email addresses or faked information to trace abuse of PII back to its source, but that’s not really a solution.
So what to do? Short term, it is about implementing processes which at least try to minimize Data Sprawl and the associated risk, as mentioned above. These processes and policies are far from perfect. That helps internally, but not for PII.
In the (very) long term, we might use technical solutions like homomorphic encryption and other technologies being developed around “minimal disclosure” approaches to address some of the issues. We might then use an approach like Information Rights Management which works not on a document basis but on an attribute basis. But most of these things will help us sometime in the future, if ever.
But what about defining a policy standard which is sticky to the data? A standard which describes how data may be used? If systems support this standard, they could enforce it. The idea would be to have such a standard and to allow exports, at least of sensitive data, only to systems which support the standard and enforce the policies. If data is split up, the policy has to stick to all parts (as long as it applies to all parts). If data is combined, the policies have to be combined – the intersection of the policies then applies.
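The split/combine semantics just described can be sketched in a few lines. This is a hypothetical model – no such sticky-policy standard exists today, which is exactly the point of this post – and the permission names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StickyPolicy:
    """A usage policy that travels with a data set.
    'allowed_uses' models permitted purposes, e.g. analytics or billing."""
    allowed_uses: frozenset

def split(policy: StickyPolicy):
    """Splitting data: each part keeps the full original policy."""
    return policy, policy

def combine(p1: StickyPolicy, p2: StickyPolicy) -> StickyPolicy:
    """Combining data sets: only the intersection of both policies
    applies to the combined result."""
    return StickyPolicy(p1.allowed_uses & p2.allowed_uses)

hr_export = StickyPolicy(frozenset({"analytics", "billing"}))
crm_export = StickyPolicy(frozenset({"analytics"}))
merged = combine(hr_export, crm_export)
print(merged.allowed_uses)  # only "analytics" survives the combination
```

Note that the intersection rule is deliberately conservative: combining data can only narrow what is allowed, never widen it, which is what an auditor would expect from such a standard.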
Such an approach has limitations, because it will first of all need some people to define the standard. And, like with all standards, it is about reaching critical mass. On the other hand: virtually every organization has the problem of Data Sprawl and lacks a valid answer to the questions which are asked in the context of Data and Access Governance. Thus, there is a real need for such a standard. From my perspective, the large vendors in the markets for Business Applications (e.g. ERP, CRM, and related systems), Business Analytics, and all the ETL and EAI applications are the ones who should work on such a standard, because they are the ones who have to support it in their systems. And they should start quickly, because their customers are increasingly under pressure from auditors.
14.06.2011 by Martin Kuppinger
There has been a lot of FUD (Fear, Uncertainty, Doubt) regarding Control-SA. The product was moved from BMC to SailPoint in spring 2011, but communication about the impact for customers has been weak (to use a positive term…). After several talks with both SailPoint and BMC, I’d like to provide some information. First of all, SailPoint now owns Control-SA, including the support team and other related human resources. There even is a roadmap for Control-SA, and support for the newer releases (ESS 7.5.x) will be provided for several years from now.
On the other hand, SailPoint IdentityIQ is now on the price list of BMC. It can be bought with BMC contracts, BMC 1st-level support, and so on. It is the strategic solution for Access Governance and Identity/Access Management offered by BMC. BMC itself focuses only on BIRM (BMC Identity Request Management), not to be mixed up with BRIM (BMC Remedy Identity Management), which is no longer sold by BMC – the relevant parts are now either BIRM or SailPoint products (ex Control-SA).
SailPoint will soon provide its own provisioning engine – sort of a lightweight implementation, controlled by the Access Governance (and Lifecycle Management) components of IdentityIQ and using the existing connectors of Control-SA. SailPoint additionally plans to release new connectors.
This gives customers a lot of options for moving forward. They can use Control-SA for quite a while, at least if they use ESS 7.5.x or higher. They might move to the SailPoint provisioning engine, using IdentityIQ on top and the existing connectors. They might migrate to other provisioning tools, and so on. But the most important thing is: Control-SA isn’t dead, and customers can take their time to consider their options. My advice is: take your time and think about what your IAM, Access Governance, and Service Request Management should look like in the future.
I wrote a research note on “Access Governance Architectures” some 15 months ago. It discusses different architectural approaches for Access Governance – and many of them are relevant when rethinking your strategy and architecture around the three topics mentioned above. The most important point is: it is no longer about having exactly one central provisioning tool. Provisioning tools are an important element, but a lot of companies struggle with standardizing on one tool. There might be tools that have been in use for quite a while for specific environments, sometimes with a lot of customization – think about mainframe connectors. There are mergers and acquisitions bringing in new tools. There are lobbies pushing specific solutions for the Microsoft Active Directory environment or the SAP infrastructure. And there might be overly complex IT infrastructures in large organizations, divided across many organizational divisions.
That’s where integrating layers like Access Governance and/or Service Request Management come into play. They might become the glue between different provisioning systems. And they even make it easier to make changes at the provisioning layer. Modular architectures are somewhat more complex architecture-wise and from the integration perspective, but they provide you with more flexibility for changes.
Looking at Control-SA environments, putting such a layer on top (which might be SailPoint IdentityIQ, but could be another Access Governance tool, an SRM tool, or a portal as well) allows you to migrate away from Control-SA at your own pace to whatever you want – or to add other provisioning tools if required. This provides you with flexibility. And in most cases it is a better choice than just replacing one monolith with another. By the way: that is true for all the other provisioning systems, which might have to be migrated at some point in time as well.
Thus: evaluate your options first. Build a future-proof architecture (as future-proof as it can be based on what is there today). Then decide what to do with Control-SA, and when. This will give you more time for your decisions, and you will most likely end up with a better solution. Whether you then end up with a pure SailPoint solution, a mixed SailPoint/BMC (BIRM) solution, a mixed-vendor solution, or a solution purely provided by another vendor depends on your requirements. But it should be a well-thought-out decision, not something done in a hurry.
10.02.2011 by Martin Kuppinger
Having been involved in a lot of advisory projects at end-user organizations for some years now, I’d like to share some of the fundamental changes I observe. There is always a gap between what analysts like us at KuppingerCole predict and what is done in reality. Thus, it is always great to observe that things we’ve predicted and proposed are becoming reality. So what has changed over the course of the last years – trends becoming reality:
- Access and Identity Management: Back in 2008, I blogged about the relation between the terms “access” and “identity”, the latter being much more difficult to explain. Today, the clear focus is on access controls.
- More flexible architectures: Some time ago, the idea was to have one provisioning system which covers everything. Today, more flexible architectures like those described in one of my research notes are becoming reality. Access Governance on top of several provisioning systems, allowing organizations to protect existing investments and move forward in smaller steps, is increasingly common – and the increased maturity of Access Governance tools is the foundation for doing this. Provisioning is increasingly seen as a technology layer below such integration layers (not necessarily Access Governance). And so on…
- Access Governance on top, doing things in a more business-centric way: A consequence of this is that companies focus much more on business users and their requests for access (yes, for access, not mainly for identities). This isn’t entirely new, but the way IT interacts with business has changed over time.
- Integration with service request approaches (not the service desk, as BMC believes): Another tendency is to integrate access and identity requests with other service requests, either in the IAM/Access Governance tools (like in Quest One ActiveEntry or through Avatier AIMS, to name just two) or in service catalogs. However, the interface has to be for business users, not IT – i.e. not the service desk itself. Service desks are increasingly part of the integration as well, within the more distributed architectures mentioned above, but for the manual part of fulfillment in systems which aren’t connected through a provisioning system.
- Bodies of rules, policies,…: The most important change, from my perspective, is that more and more projects start with the definition of “bodies of rules”, policies, and concepts – and not with the selection of a technology. That definitely makes sense: you don’t start building a house by buying bricks; you start with blueprints.
Two more trends (among others) that are increasingly becoming reality are:
- Externalization of security out of applications in a standardized way, based on XACML and other approaches (and yes, there are real-world projects out there doing this)
- Hybrid cloud IAM and Access Governance – how to deal with mixed environments
Overall, there is a clear shift in how IAM is done. And this change will continue, with the upcoming integration of Access Governance and other IT GRC approaches into enterprise-wide GRC concepts.
To learn more about these trends as well as best practices, don’t miss EIC 2011, where thought leadership and best practices come together.