Relevance of recertification

27.10.2011 by Martin Kuppinger

In a recent briefing with CrossIdeas, the management buy-out of the former Engiweb, an Italian software manufacturer in the area of Access Governance and Dynamic Authorization Management, the company demonstrated an interesting feature: recertification based on relevance. Recertification of access rights is a key element of regulatory compliance. It is typically done on a fairly standardized schedule, usually once or twice a year. For some specific systems or groups of users the intervals are frequently shorter; a risk-oriented approach is not uncommon. However, cynics might say that the main purpose is still to make the auditors happy.

CrossIdeas has now implemented an approach they call “relevance”. Based on several criteria, such as the number of SoD violations, the system identifies the users most relevant for recertification. It currently supports six different parameters, whose weights can easily be changed using sliders. The least relevant users can then be removed from the result set (a relevance map) – again using a slider – leaving only the relevant ones. Recertification can then focus specifically on them.
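
To illustrate the mechanics, here is a minimal sketch only – the parameter names, weights, and scoring model below are my own assumptions for illustration, not the actual CrossIdeas implementation:

    # Sketch: weighted relevance scoring with a slider-style cut-off.
    # Parameters and weights are hypothetical examples.

    def relevance(user, weights):
        """Weighted sum of per-user risk indicators, e.g. SoD violations."""
        return sum(weights[k] * user[k] for k in weights)

    users = [
        {"name": "alice", "sod_violations": 4, "privileged_accounts": 2, "dormant_rights": 1},
        {"name": "bob",   "sod_violations": 0, "privileged_accounts": 0, "dormant_rights": 5},
    ]

    # Slider positions, normalized to weights summing to 1.
    weights = {"sod_violations": 0.6, "privileged_accounts": 0.3, "dormant_rights": 0.1}

    # The "relevance map": rank users, then a second slider sets the cut-off
    # below which the least relevant users drop out of the campaign.
    cutoff = 1.0
    campaign = [u["name"]
                for u in sorted(users, key=lambda u: relevance(u, weights), reverse=True)
                if relevance(u, weights) >= cutoff]
    print(campaign)  # ['alice'] - only the most relevant users are recertified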

This feature isn’t a full replacement for standard, regular recertification campaigns (which IDEAS, the CrossIdeas product, supports as well). Relevance is, from my perspective, a nice concept which brings value to customers because they can easily run focused recertification campaigns for the most relevant users in addition to standard recertification. That not only makes the auditor happy but also helps to better mitigate access risks. Not that standard recertification doesn’t help – but there is room for improvement, and CrossIdeas has demonstrated an approach to achieve it which will be available in the new release due later this year.



The UBS case: again, 2 billion US$ lost due to unauthorized transactions by a trader

15.09.2011 by Martin Kuppinger

Today the next story about a bank failing to manage trading risks hit the news. It remains unclear what allowed the trader to execute unauthorized (and thus most likely illegal) transactions which led to that loss. However, the risk management of UBS obviously failed. By the way: UBS had to announce this on the very day the Swiss parliament started a debate about new regulations for the finance industry.

It will be interesting to hear why this could happen. Did some people co-operate? Did the risk management system for those specific types of transactions fail? Or was it an Access Management problem like at SocGen some time ago, where the trader was able to act as his own controller? Whatever the reason, the incident proves that there is still a long way to go in Risk Management and overall GRC – and not only in the finance industry.

GRC is a key task for C-level management. It needs sufficient funding. It needs support for the organizational changes required to build an organization with a high degree of process maturity and an understanding of the GRC requirements. It needs a strategic approach to integrating business and IT to optimally support GRC, given that most business relies on IT systems and fraud in these systems causes the most severe harm. It needs an organizational and an IT-architectural approach to managing different regulations and all types of risks in a structured and efficient way.

For those thinking about how to move forward in GRC, today’s KuppingerCole webinar might be worth attending. It won’t answer all questions, but it will provide some valuable hints for moving forward in GRC. For sure, this is a long journey. But I strongly believe that it is feasible to avoid incidents like the one which has now happened at UBS – and to mitigate the overall risks for organizations through a strategic GRC initiative (instead of point solutions).


Why you should focus on the infrastructure layer

21.04.2011 by Martin Kuppinger

In these days of slowly increasing maturity of Cloud Computing, it becomes more and more obvious that – and why – IT depends on a well-thought-out layer which I tend to simply call “infrastructure”. I have two simple pictures of IT in mind:

  • The somewhat classical model of platform, infrastructure, and software, as found in PaaS, IaaS, and SaaS in the common Cloud Computing meta models. It’s about hardware and other foundational components like operating systems, about the layer in between to manage and orchestrate everything, and about the applications themselves.
  • Another view also consists of three layers: the services exposed to the users (i.e. in most cases the business) on top, the service production (in the public cloud, in a private cloud, or in non-cloudified IT environments) at the bottom – and a layer in between which, again, is used for managing and orchestrating everything. Again, this layer might best be called “infrastructure”.

It is this layer which connects everything. Thus, the efficiency and effectiveness of this layer are the foundation of the efficiency and effectiveness of the entire IT. Optimizing this layer allows the available services to be better connected to the business demands. It allows the different layers in the cloud to be managed.

When looking at that layer, there are a few key elements:

  • Service Management, i.e. the entire area of procurement, service request management, accounting, availability, performance, and whatever else it takes to ensure that the services are delivered as expected
  • Information Security Management, including IAM (Identity and Access Management) and at least IT GRC (Governance, Risk Management, Compliance)
  • Application Infrastructures, i.e. middleware for connecting services, enhancing them if required, and doing the orchestration

Did I miss important elements? OK, there is classical IT security; however, that’s part of Information Security – the reason we look at IT security is to protect information. You might add some other elements; however, I tend to keep this model simple.

To me it appears more important to look at the dependencies between these three elements. Information Security and Service Management have to work hand in hand to ensure that access to services is restricted and controlled. Applications and Information Security are tightly related – think about how to build secure apps. And applications are, at the end of the day, nothing other than services which have to be managed.

I personally believe that starting with such a model and outlining the blueprint for your future IT definitely helps in separating the important from the less important things and in focusing on building an IT ecosystem in your organization which is stable and works with whatever you plan to do in the Cloud.

See you at EIC 2011 in Munich, May 10th to 13th.


We need a policy standard for the use of data

10.03.2011 by Martin Kuppinger

One of the issues I’m confronted with in most of the advisory engagements I’m doing is “how to protect information once it leaves a system”. A typical situation is that HR data leaves the HR system and is imported into another system – the identity provisioning system, a business analytics tool, or whatever else. Once the information is out of HR, it is out of control. Lost somewhere in the happy hunting grounds of information…

However, from a governance perspective (and due to many specific regulations) we have to keep control. PII has to be well managed, financial data has to be well managed, risk-related information as well – and the same is true for many other types of information.

What can we do to address this? The first step is to have a governance process in place for the export/import interfaces. There needs to be a defined process for deciding which information is allowed to be exported for use in other systems. Knowing the paths data can take is an important step; having processes, approvals, and documentation in place definitely helps. Limiting the paths by using fewer systems to distribute information helps as well. You might have 100 interfaces between your HR system (or whatever system you are currently looking at) and the target systems, or a single interface to a hub, which might be an identity provisioning tool, an ETL (Extract, Transform, Load) platform, or something else.
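
As a minimal technical sketch of what such an interface governance could enforce (the registry format and attribute names are my own illustration, not a specific product feature):

    # Sketch: a hub releases only attributes whose export has been
    # approved and documented for a given interface.

    APPROVED_EXPORTS = {
        # (source, target): attributes approved for this interface
        ("hr", "provisioning"): {"employee_id", "department", "manager"},
        ("hr", "analytics"):    {"department", "cost_center"},
    }

    def export(source, target, record):
        allowed = APPROVED_EXPORTS.get((source, target))
        if allowed is None:
            raise PermissionError(f"No approved interface {source} -> {target}")
        # Pass on only the approved subset; everything else stays in the source.
        return {k: v for k, v in record.items() if k in allowed}

    hr_record = {"employee_id": "4711", "department": "R&D",
                 "manager": "jdoe", "salary": 100000}
    print(export("hr", "analytics", hr_record))  # salary never leaves HR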

However, all of these are limited solutions. They don’t solve what happens with the data in the target systems. For sure you can expand the processes and policies and try to ensure that the information is handled appropriately in the targets, but that is a pretty hard task.

Some time ago I discussed with Kim Cameron (Microsoft) how to protect the PII a user issues – how can the user keep control over “his” PII? Once he hands it over to a system, he is out of control. We might apply techniques derived from IRM (Information Rights Management) to this data; however, that would be a big beast to tame, with all the encryption, user management, and policies involved. Kim brought in his idea of using policies which are somewhat “sticky” to the PII and enforced by the target systems.

For the foreseeable future this is probably the only approach which will really work, for any type of information flowing: having a policy standard which defines the rules for using the data. The policy has to flow with the data, and it has to be enforced by the target systems. It could include rules covering aspects like the sensitivity of the information (PII, financial, …), whether it can be exported again, specific protection requirements, limitations on access (“only users with clearance …”), and so on. When information from different sources is combined in a target system, a resulting policy has to be calculated which again is sticky to that data.
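
Since no such standard exists yet, the following is purely a sketch of the idea – a policy object that travels with the data, plus a merge rule that keeps the stricter constraint when data from several sources is combined. All field names are invented for illustration:

    # Sketch of a "sticky" policy attached to data.

    from dataclasses import dataclass

    LEVELS = ["public", "internal", "pii", "financial"]  # ordered by sensitivity

    @dataclass(frozen=True)
    class StickyPolicy:
        sensitivity: str      # e.g. "pii"
        re_export: bool       # may the target export the data again?
        min_clearance: int    # access limitation, e.g. "only users with clearance 3"

        def merge(self, other):
            """Combining data yields the stricter of both policies."""
            return StickyPolicy(
                sensitivity=max(self.sensitivity, other.sensitivity, key=LEVELS.index),
                re_export=self.re_export and other.re_export,
                min_clearance=max(self.min_clearance, other.min_clearance),
            )

    hr_policy = StickyPolicy("pii", re_export=False, min_clearance=2)
    fin_policy = StickyPolicy("financial", re_export=True, min_clearance=3)
    print(hr_policy.merge(fin_policy))
    # StickyPolicy(sensitivity='financial', re_export=False, min_clearance=3)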

That might sound a little complex, but from my perspective it isn’t really that difficult to implement. Is there any chance of success? I think so. Once a standard for such policies is defined and implemented by some of the larger vendors, it will become a decision criterion for information systems: do they support this standard, so that I’m better able to fulfill my governance and compliance requirements? Such a standard will help in achieving data governance and in supporting data owners (and many other players…).

As with any standard, this requires large vendors to participate and to push the standard. Given that most large vendors are facing these issues, participation should be a no-brainer.

I’m very open to engaging in starting such an effort – bringing the participants together, handing over to a standardization initiative, providing my thoughts as initial input, and helping with dissemination. So if you are interested in participating – just send me an e-mail. Once I have sufficient initial support, I’ll try to get the remaining big vendors on board.

And, by the way: don’t forget to register for EIC 2011 and Cloud 2011 – the early bird discount will end soon! And maybe we can actively start this initiative there.



From technology to business – the shift in Identity and Access Management

10.02.2011 by Martin Kuppinger

Having been involved in a lot of advisory projects at end-user organizations for some years now, I’d like to share some of the fundamental changes I observe. There is always a gap between what analysts like us at KuppingerCole predict and what is done in reality. Thus it is always great to observe that things we’ve predicted and proposed are becoming reality. So here is what has changed over the course of the last years – trends becoming reality:

  • Access and Identity Management: Back in 2008 I blogged about the relationship between the terms “access” and “identity”, the latter being much more difficult to explain. Today the clear focus is on access controls.
  • More flexible architectures: Some time ago the idea was to have one provisioning system which covers everything. Today, more flexible architectures like the one described in one of my research notes are becoming reality. Access Governance on top of several provisioning systems, allowing existing investments to be protected and progress to be made in smaller steps, is increasingly common – and the increased maturity of Access Governance tools is the foundation for this. Provisioning is increasingly seen as a technology layer below such integration layers (which aren’t necessarily Access Governance). And so on…
  • Access Governance on top, doing things in a more business-centric way: A consequence of this is that companies focus much more on business users and their requests for access (yes, for access, not mainly for identities). This isn’t entirely new, but the way IT interacts with business has changed over time.
  • Integration with service request approaches (not service desks, as BMC believes): Another tendency is to integrate access and identity requests with other service requests, either in the IAM/Access Governance tools (like in Quest One ActiveEntry or Avatier AIMS, to name just two) or in service catalogs. However, the interface has to be for business users, not for IT – i.e. not the service desk itself. Service desks are also increasingly part of the integration, within the more distributed architectures mentioned above, but for the manual part of fulfillment in systems which aren’t connected through a provisioning system.
  • Bodies of rules, policies, …: The most important change, from my perspective, is that more and more projects start with the definition of “bodies of rules”, policies, and concepts – and not with the selection of a technology. That definitely makes sense: you don’t start building a house by buying stones, you start with blueprints.

Two more trends (amongst others) increasingly becoming reality are:

  • Externalization of security out of applications in a standardized way, based on XACML and other approaches – and yes, there are real-world projects out there doing this; see the sketch after this list
  • Hybrid cloud IAM and Access Governance – how to deal with mixed environments
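
To illustrate the first of these two trends: externalization means the application no longer hard-codes authorization decisions but asks a central decision point, in the pattern XACML standardizes. The sketch below mimics that request/decision flow in plain Python; it is not actual XACML, and the attributes and the rule are invented:

    # A Policy Enforcement Point (PEP) in the application asks a
    # Policy Decision Point (PDP) instead of deciding itself.

    def pdp_decide(subject, resource, action):
        """Central PDP: evaluates policies over attributes."""
        if resource == "payroll" and action == "read":
            return "Permit" if subject.get("role") == "hr-manager" else "Deny"
        return "Deny"  # deny by default

    def pep_guard(subject, resource, action):
        """PEP inside the application: it enforces, it doesn't decide."""
        if pdp_decide(subject, resource, action) != "Permit":
            raise PermissionError(f"{action} on {resource} denied")

    pep_guard({"role": "hr-manager"}, "payroll", "read")   # passes
    # pep_guard({"role": "developer"}, "payroll", "read")  # would raise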

Overall, there is a clear shift in how IAM is done. And this change will continue, with the upcoming integration of Access Governance and other IT GRC approaches into enterprise-wide GRC concepts.

To learn more about these trends as well as the best practices, don’t miss EIC 2011, where thought leadership and best practices come together.


Virtualization vs. Security

27.01.2011 by Martin Kuppinger

A few days ago, a vendor talked at an analyst meeting about the relationship between virtualization and security. The argument was: at the hypervisor you can combine network security management, server security management, and some other aspects of security management – I can’t remember everything. Thus, so the claim, virtualization increases security, because you have one point of control.

Right – as long as you can control what administrators and operators are doing. Unfortunately, that’s not the case in typical virtualization environments. There is no PxM (Privileged Access/Account/Identity/User Management) at all. And in that case, combining everything is a problem – a nightmare from a compliance point of view. For sure there is value in having a single point of control, but only if you are able to adequately control its use.

I asked the speaker about the solutions around PxM offered by that vendor – there weren’t any.

Without specific virtualization security solutions – PxM being one very important one amongst them – there is a virtualization security risk. There is potential for increasing security by using adequate technology, which is provided by several vendors. But claiming that there is value in combining a lot of highly elevated administrative actions without being able to manage them doesn’t make any sense.

For a comprehensive overview of what customers expect around virtualization security, just have a look at that survey.

And don’t forget to register for EIC 2011 and Cloud 2011.


Lessons enterprises should learn from the recent Wikileaks disclosure

17.12.2010 by Martin Kuppinger

There has been a lot of discussion around Wikileaks publishing an incredible amount of data which had been classified as confidential by the US government. I don’t want to discuss this specifically – many people have done so before, with fundamentally different conclusions. More interesting is what this means for private organizations, especially enterprises. Wikileaks has threatened some of them: the Russian oligarchs, the finance industry in general. That comes as no surprise: Wikileaks founder Assange rates them as “bad”, i.e. as his enemies. Given that Wikileaks isn’t alone out there, there is an obvious threat to any enterprise. Some might think that construction plans from the defense industry should be published. Others might think the same should be done with blueprints from the automotive industry after claimed incidents. Or with the cost accounting of the utilities, if power or gas appears to be too expensive. I don’t want to judge the reasons – I have my personal opinion on this, but that’s out of the scope of this post.

Looking at that situation from an enterprise perspective, it becomes obvious that information security has to move to the top of the CIO agenda (and the CEO agenda!) if it isn’t there yet – given that the enterprise isn’t willing to share everything with the public: blueprints, calculations, whatever… That requires approaches which are somewhat more fine-grained than the ones which obviously have been in place in the US government, allowing a private (or something like that – I’m not that familiar with the ranks in the US military) to access masses of documents. It also requires efficiently protecting the information itself instead of only the information system. Information tends to flow, and once it is out of the system, the system-level security doesn’t grip anymore.

That leads inevitably to the topic of Information Rights Management (IRM), which is a frequent topic in the blogs of Sachar Paulus and myself – just have a look at our blogs. However, implementing IRM the typical way in organizations requires centralized policies, classifications, and so on. And classification obviously failed in the last Wikileaks incident. Thus, I’d like to bring in an idea Baber Amin recently brought up in a discussion during a KuppingerCole webinar. He talked about “identity-based encryption”, which in fact means encrypting information in a way which is controlled by the single user. That leads to an IRM where the single user controls who is allowed to use the information he creates or owns. It is not (mainly) the organization.

But: will that work? Some arguments and counter-arguments:

  1. Information is not accessible once the user leaves the organization: Not correct – there might be an additional “master” key to allow recovery and so on (see the sketch after this list). Many lessons could be learned from Lotus Notes in that area, to name an example.
  2. There are no corporate policies: Not correct – these could be understood as a second level of protection, adding to the first level managed by the user. That is, classical IRM and personalized IRM could be combined.
  3. It won’t work because the user doesn’t understand what to do: Not correct. Just look at how users deal with information security in their daily lives. For sure some things go wrong and lessons have to be learned (not to appear drunk in a photo on Facebook, for example), but overall it works pretty well. Combined with the corporate policies, this should turn out to be much better than corporate policies alone. Trust the employee and the wisdom of crowds.
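
To make counter-argument 1 concrete, here is a minimal sketch of user-controlled encryption combined with a corporate recovery key, in the form of envelope encryption. It uses the symmetric Fernet scheme from the Python “cryptography” package for brevity; a real IRM system would use the users’ public keys and proper key escrow:

    from cryptography.fernet import Fernet

    user_key = Fernet.generate_key()       # controlled by the employee
    recovery_key = Fernet.generate_key()   # escrowed by the organization

    # Encrypt the document once with a fresh data key ...
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"confidential blueprint")

    # ... and wrap that data key for each party allowed to recover it.
    wrapped_for_user = Fernet(user_key).encrypt(data_key)
    wrapped_for_org = Fernet(recovery_key).encrypt(data_key)

    # After the user leaves, the organization can still recover the data:
    recovered_key = Fernet(recovery_key).decrypt(wrapped_for_org)
    print(Fernet(recovered_key).decrypt(ciphertext))  # b'confidential blueprint'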

Simply spoken: think about doing it differently than before. It is not about adding new tools at the (perforated) perimeter and all these point solutions. It is about building a few consistent lines of defense, including and especially next-generation IRM. For sure there is some way to go, and the tools aren’t there yet. But when thinking about how to protect the intellectual property and the secrets your organization wants to keep (for whatever reason – I don’t judge here…), you should definitely think beyond the traditional approaches of IT security – look especially at Information Security instead of Technology Security, i.e. the I and not the T in IT.

If you think this topic is worth thinking about, you shouldn’t miss EIC 2011 – the conference on IAM, GRC, and Cloud Security, and thus also on the things discussed in this post. And don’t hesitate to ask for our advisory services ;-)


Security questions for authentication – a ticking privacy time bomb?

30.09.2010 by Martin Kuppinger

We are all familiar with external (and sometimes also internal) websites which require us to pick or define security questions and to provide answers to them. What is your mother’s maiden name? Which is your favourite sports team? Which colour do you like most? And so on… These questions are sometimes used as an additional means of authentication, for example by PayPal. More frequently they are used for password resets.

These days, while working with my colleagues Sachar Paulus and Sebastian Rohr on a comprehensive piece on strong authentication which will be published soon, we discussed the privacy aspects of all these (more or less strong) authentication approaches – and struggled… The answers to all the typical questions are privacy-relevant data. They reveal some important knowledge about the user. The more questions, the more knowledge. You could argue that this isn’t that sensitive information – but first of all, it is personal data, and second, it depends on the questions.

But have you ever seen anything like privacy-related disclaimers, buttons to accept the privacy policies of the organization, or something similar around these questions? I can’t remember any. That leads to the assumption that probably few people have ever thought about the privacy aspect of these questions – which means that the relevant compliance regulations have simply been ignored.

From our perspective, organizations should check where they use such questions and whether this is in sync with the compliance regulations they have to meet. Otherwise such a simple mechanism might become a real issue from a legal perspective.

The website for the European Identity Conference 2011, to be held in May 2011 in Munich, is online now.


IBM acquires OpenPages – and proves our GRC vision

16.09.2010 by Martin Kuppinger

It is always nice when trends an analyst has predicted become reality. I’ve been talking and blogging for a pretty long time about the need for an integrated GRC approach, especially one beyond the isolated “Enterprise GRC” with little automation. Yesterday, IBM announced that it has agreed to acquire OpenPages, one of the most prominent vendors in the Enterprise GRC space. That isn’t really a surprise, given that IBM has been investing in the GRC market for quite a while. The really interesting parts of the presentation IBM gave on this acquisition yesterday are the ones where the Enterprise GRC layer of OpenPages becomes integrated with the IT GRC tools of IBM – Business Analytics as well as many Tivoli tools. In other words: it is about integrating the different layers of GRC to provide a more complete and more current (through automation) view of the controls.

That fits well with our expectations as well as with the KuppingerCole GRC Reference Architecture. Successful GRC is based on a mix of manual and automated controls. I remember a conversation with the OpenPages executives in which they in fact denied the need for such an integration. Now, becoming a part of IBM, that seems to be changing fundamentally, because the IBM strategy is about exactly this integration, with a strong layer on top for the executive view.

While some vendors like MetricStream are pushing this approach, and others like RSA/EMC with their Archer acquisition in January 2010 have the same potential, it will be very interesting to observe how other “Enterprise GRC” vendors react (I still believe this is an arrogant term as long as these solutions ignore most parts of the enterprise and are mainly high-level solutions focused on manual controls, with little integration into the other GRC layers). With the IBM acquisition of OpenPages, the times when a vendor could ignore the integration of GRC at all levels are past. Thus, this acquisition will heavily influence the overall GRC market, and some of the more prominent “Enterprise GRC” players might end up on the losing side.


BAM brought to reality

02.07.2010 by Martin Kuppinger

Do you remember the term BAM? BAM is an acronym for Business Activity Monitoring. It was a hype topic in the early 2000s. And then we didn’t hear that much about it anymore. Yes, there are several vendors out there providing different types of solutions. And, as always, there are several vendors who claim to be the leaders in the BAM category.

When BAM became a hot topic some 10 years ago, the implementations were nothing more than slightly advanced analytics. That was, at that point in time, far away from my expectations, which were about intelligent, automated, real-time and ex-post analysis of relevant activities in business systems and the identification of critical changes which require intervention. For sure, automated reactions as well as alerting should be part of this.

The term BAM came to my attention again when talking with MetricStream recently. MetricStream is one of the leading-edge vendors in the GRC market. They are one of the “Enterprise GRC” vendors (Business GRC would be the better term). But in contrast to many others, they allow for tight integration with IT systems and IT controls. Based on that, they are able to use automated controls of virtually any type and map them into their system. That in fact allows integrating what I had expected from BAM years ago with a holistic GRC approach. By the way: MetricStream has a pretty high rank on my list of GRC vendors…

Looking at the BAM market, I have to admit that there has been evolution since the early years. There is much more automation than pure analytics, and there are several interesting solutions out there. However, MetricStream is somewhat unique in enabling this (without talking about BAM) in the context of Business GRC, and thus in allowing it to be added as a generic approach to what every organization has to do today: building a GRC infrastructure with manual and automated controls – where the automated controls should provide what BAM has been promising.
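
As a toy illustration of what an automated control in the BAM sense means – near-real-time monitoring of business events with alerting on critical changes – here is a minimal sketch; the threshold, the event fields, and the alert channel are all invented for this example:

    LIMIT_PER_TRADER = 1_000_000  # hypothetical trading limit

    def monitor(events, alert):
        """Accumulate exposure per trader and alert when a limit is breached."""
        exposure = {}
        for e in events:
            exposure[e["trader"]] = exposure.get(e["trader"], 0) + e["amount"]
            if exposure[e["trader"]] > LIMIT_PER_TRADER:
                alert(f"Trader {e['trader']} exceeds limit: {exposure[e['trader']]}")

    events = [
        {"trader": "t1", "amount": 400_000},
        {"trader": "t1", "amount": 700_000},  # pushes t1 over the limit
    ]
    monitor(events, alert=print)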

I assume that several of you have a different opinion – so I’m looking forward to your comments.

