SailPoint and BMC – how to move forward?

14.06.2011 by Martin Kuppinger

There has been a lot of FUD (Fear, Uncertainty, Doubt) regarding Control-SA. The product was moved from BMC to SailPoint in spring 2011, but communication about the impact on customers has been weak (to use a positive term…). After several talks with both SailPoint and BMC, I’d like to provide some information. First of all, SailPoint now owns Control-SA, including the support team and other related human resources. There even is a roadmap for Control-SA, and support for the newer releases (ESS 7.5.x) will be provided for several years to come.

On the other hand, SailPoint IdentityIQ is now on the price list of BMC. It can be bought with BMC contracts, BMC 1st-level support, and so on. It is the strategic solution for Access Governance and Identity/Access Management offered by BMC. BMC itself focuses only on BIRM (BMC Identity Request Management), not to be confused with BRIM (BMC Remedy Identity Management), which is no longer sold through BMC (the relevant parts are now either BIRM or SailPoint products, ex Control-SA).

SailPoint will soon provide its own provisioning engine, a sort of lightweight implementation controlled by the Access Governance (and Lifecycle Management) components of IdentityIQ, which uses the existing connectors of Control-SA. SailPoint additionally plans to release new connectors.

This gives customers a lot of choices for moving forward. They can use Control-SA for quite a while, at least if they use ESS 7.5.x or higher. They might move to the SailPoint provisioning engine, with IdentityIQ on top and the existing connectors. They might migrate to other provisioning tools, and so on. But the most important thing is: Control-SA isn’t dead, and customers can take their time to consider their options. My advice: take your time and think about how your IAM, Access Governance, and Service Request Management should look in the future.

I wrote a research note on “Access Governance Architectures” some 15 months ago. In it I discuss different architectural approaches for Access Governance – and many of them are relevant when rethinking your strategy and architecture around the three topics mentioned above. The most important point: it is no longer about having exactly one central provisioning tool. Provisioning tools are an important element, but many companies struggle with standardizing on one tool. There might be tools that have been in use for quite a while for specific environments, sometimes with a lot of customization – think of mainframe connectors. There are mergers and acquisitions bringing in new tools. There are lobbies pushing specific solutions for the Microsoft Active Directory environment or the SAP infrastructure. And there might be overly complex IT infrastructures in large organizations, divided across many divisions.

That’s where integrating layers like Access Governance and/or Service Request Management come into play. They can become the glue between different provisioning systems, and they even make it easier to change things at the provisioning layer. Modular architectures are somewhat more complex architecture-wise and from an integration perspective, but they give you more flexibility for changes.

Looking at Control-SA environments, putting such a layer on top (which might be SailPoint IdentityIQ, but could just as well be another Access Governance tool, an SRM tool, or a portal) allows you to migrate away from Control-SA at your own pace to whatever you want – or to add other provisioning tools if required. That is the flexibility this approach provides. And in most cases it is a better choice than just replacing one monolith with another. By the way: the same is true for all other provisioning systems, which might have to be migrated at some point in time as well.
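The “glue layer” idea can be made concrete with a minimal sketch. Everything below is hypothetical (class names, system names, and method signatures are illustrations, not any real Control-SA or IdentityIQ API): a governance layer dispatches requests to interchangeable provisioning connectors behind a common interface.

```python
from abc import ABC, abstractmethod

class ProvisioningConnector(ABC):
    """Common interface the governance layer programs against."""

    @abstractmethod
    def grant(self, user: str, entitlement: str) -> str: ...

    @abstractmethod
    def revoke(self, user: str, entitlement: str) -> str: ...

class LegacyConnector(ProvisioningConnector):
    """Stands in for an existing engine (e.g. a long-lived Control-SA deployment)."""
    def grant(self, user, entitlement):
        return f"legacy: granted {entitlement} to {user}"
    def revoke(self, user, entitlement):
        return f"legacy: revoked {entitlement} from {user}"

class NewConnector(ProvisioningConnector):
    """Stands in for a replacement engine added later."""
    def grant(self, user, entitlement):
        return f"new: granted {entitlement} to {user}"
    def revoke(self, user, entitlement):
        return f"new: revoked {entitlement} from {user}"

class GovernanceLayer:
    """Routes requests by target system; connectors can be swapped
    without touching the request/approval logic on top."""
    def __init__(self):
        self._connectors: dict[str, ProvisioningConnector] = {}

    def register(self, system: str, connector: ProvisioningConnector):
        self._connectors[system] = connector

    def request_access(self, user: str, system: str, entitlement: str) -> str:
        return self._connectors[system].grant(user, entitlement)

layer = GovernanceLayer()
layer.register("mainframe", LegacyConnector())
layer.register("ad", NewConnector())
print(layer.request_access("alice", "mainframe", "RACF-GROUP-X"))
```

Migrating the mainframe target then means re-registering “mainframe” with a different connector; nothing above the governance layer changes, which is exactly the flexibility argued for here.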

Thus: evaluate your options first. Build a future-proof architecture (as future-proof as one can be, based on what is there today). Then decide what to do with Control-SA, and when. This will give you more time for your decisions, and you will most likely end up with a better solution. Whether you then end up with a pure SailPoint solution, a mixed SailPoint/BMC (BIRM) solution, a mixed-vendor solution, or a solution provided purely by another vendor depends on your requirements. But it should be a well-thought-out decision, not something done in a hurry.


Why you should focus on the infrastructure layer

21.04.2011 by Martin Kuppinger

In these days of slowly increasing Cloud Computing maturity, it becomes more and more obvious that – and why – IT depends on a well-thought-out layer which I tend to simply call “infrastructure”. I have two simple pictures of IT in mind:

  • The somewhat classical model of platform, infrastructure, and software, as found in the PaaS, IaaS, and SaaS of common Cloud Computing meta models. It’s about hardware and other foundational components like operating systems, about the layer in between that manages and orchestrates everything, and about the applications themselves.
  • Another view, which also consists of three layers: the services exposed to the users (in most cases the business) on top, the service production (in the public cloud, a private cloud, or a non-cloudified IT environment) at the bottom – and a layer in between which again is used for managing and orchestrating everything. Again, this layer might best be called “infrastructure”.

This is the layer which connects everything. Thus, the efficiency and effectiveness of this layer are the foundation of the efficiency and effectiveness of the entire IT. Optimizing this layer allows you to better connect the available services to business demands. It allows you to manage the different layers in the cloud.

When looking at that layer, there are a few key elements:

  • Service Management, i.e. the entire area of procurement, service request management, accounting, availability, performance, and whatever else it takes to ensure that services are delivered as expected
  • Information Security Management, including IAM (Identity and Access Management) and at least IT GRC (Governance, Risk Management, Compliance)
  • Application Infrastructures, i.e. middleware that allows you to connect services, to enhance them if required, and to handle orchestration

Did I miss important elements? OK, there is classical IT security; however, that is part of Information Security – the reason we look at IT security is to protect information. You might add some other elements; I tend to keep this model simple, though.

To me it appears more important to look at the dependencies between these three elements. Information Security and Service Management have to work hand in hand to ensure that access to services is restricted and controlled. Applications and Information Security are tightly related – think about how to build secure apps. And applications are, at the end of the day, nothing other than services which have to be managed.

I personally believe that starting with such a model and outlining the blueprint for your future IT definitely helps in separating the important from the less important things, and in focusing on building an IT ecosystem in your organization which is stable and works with whatever you plan to do in the Cloud.

See you at EIC 2011 in Munich, May 10th to 13th.


What I would expect from capacity MANAGEMENT

24.11.2010 by Martin Kuppinger

These days I talked with a vendor about its news around capacity management. Capacity management is used as a term to describe products for optimizing workloads in data centers and making the best use of the resources within those data centers. Honestly, I hadn’t done a deep dive into that specific area before, and maybe I’m a greenhorn on advanced IT topics – but when vendors promise capacity management, I’d expect it to be somewhat active. However, I’ve learned that it’s about reporting and analytics – even when associated with terms like dynamic, continuous, proactive, or whatever. I’ve checked several other offerings in the meantime, and it appears to be common understanding among these vendors that, in the capacity management space, management ends with some nice reports.

I would have used the terms capacity reporting or capacity analytics instead, but I know that vendor marketing tends to “beautify” their stories a little.

So what are my expectations of capacity management? The issue is obvious – it is complex to manage workloads in increasingly virtualized and complex data centers. And it becomes even more complex when we have different “clouds”, private and public, where we might put these workloads. Thus we need support for optimizing this – dynamic or continuous capacity management is key. However, I’d expect at least a little more than just the information that there is something to be optimized. Achieving fully automated management will surely be a long journey, requiring a strong and valid policy model to ensure that nothing goes wrong. It will also require tight integration with Business Service Management to ensure that the specifics of business processes, like high workloads at the end of the month or year, are met. It will even require that financial aspects (costs of internal and external cloud resources) and governance aspects (which data and workloads are allowed to be placed where, due to governance/compliance/security constraints?) are taken into account. In the meantime, automatically created “jobs” for the virtualization, server, and application management tools, to easily make the proposed changes reality, would be a first step.
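That “first step” – turning analytics into concrete, policy-aware action proposals rather than mere reports – can be sketched as follows. The data model, thresholds, and policy here are entirely made up for illustration; no vendor product works exactly like this:

```python
# Hypothetical capacity data: host -> (CPU utilization %, current location)
hosts = {
    "vm-01": (95, "internal"),
    "vm-02": (40, "internal"),
    "vm-03": (92, "public-cloud"),
}

# Governance constraint: workloads tagged "regulated" must stay internal.
workload_tags = {"vm-01": "regulated", "vm-03": "unregulated"}

CPU_THRESHOLD = 90  # above this, propose moving the workload

def propose_moves(hosts, tags):
    """Go beyond reporting: emit concrete job proposals that already
    respect the governance constraint on workload placement."""
    jobs = []
    candidates = ["public-cloud", "internal-spare"]  # cheapest first
    for host, (cpu, _location) in hosts.items():
        if cpu <= CPU_THRESHOLD:
            continue  # nothing to optimize for this host
        # Filter targets by policy: regulated workloads stay internal.
        allowed = [c for c in candidates
                   if tags.get(host) != "regulated" or c == "internal-spare"]
        jobs.append({"move": host, "to": allowed[0],
                     "reason": f"cpu {cpu}% > {CPU_THRESHOLD}%"})
    return jobs

for job in propose_moves(hosts, workload_tags):
    print(job)
```

The point of the sketch is the shape of the output: not a chart saying “vm-01 is hot”, but a job (“move vm-01 to internal-spare, because…”) that a virtualization or server management tool could execute.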

I’m looking forward to seeing capacity management become reality. There is definitely value in today’s solutions. But that value could be significantly bigger than it is.


Finally: Novell is sold

23.11.2010 by Martin Kuppinger

I’ve been following Novell for more than 20 years now. And for roughly the same period of time there have been rumours of other companies acquiring Novell. But it never happened. Not really, at least. You could argue that the acquisition of Cambridge Technology Partners was sort of a takeover of Novell by Cambridge, with Jack Messman becoming CEO and so on. But in the end, Novell was on its own again. Yesterday, however, the news spread that Attachmate is buying Novell – finally they are sold. Attachmate will keep Novell as a separate business unit and maintain the Novell and SUSE brands. In other words: there won’t be that many changes from a customer perspective, at first glance.

Looking at Attachmate and NetIQ, it becomes obvious that Attachmate is, at this point in time, keeping its acquisitions somewhat separate. There is still a NetIQ website, and the NetIQ brand is still maintained. Behind the scenes there is integration – but not on the customer-facing side. It is most likely that the same strategy will be followed with Novell.

However, the questions are whether, when, and how Attachmate will start to build on the potential of tighter integration between its different “divisions”, i.e. classical Attachmate, NetIQ, and Novell. There is significant potential for integration – look at the broad support for different environments, from the mainframe to NetWare, Linux, and Windows. Look at the expanded capabilities for managing networks delivered by NetIQ and Novell. And think about what the outcome for “intelligent workload management”, i.e. the optimization and management of workloads in virtualized/cloud environments, could be if all the strengths of Attachmate, NetIQ, and Novell were put together. Thus, there is some interesting potential for the future.

One question remains: What does this mean for existing Novell customers, and what should they do? The answer, at this point in time, is simple: stay calm and proceed as planned. There is no reason to move away from Novell – on the contrary: Novell is now part of a significantly larger organization, and since it finally has been acquired, the rumours around acquisitions are a thing of the past. And the opportunities arising from this acquisition for existing Novell customers are significantly greater than the risks – especially if Attachmate starts to leverage the potential synergies between the different companies within that conglomerate.


Cloud Computing is mainly Service Management

28.10.2010 by Martin Kuppinger

When looking at all the discussions around the “cloud”, I still miss a focus on the real essentials of a strategic (!) approach to using clouds. Clouds – in the currently common understanding of private, hybrid, and public clouds – are in fact nothing other than IT environments which produce IT services. These services are provided at many different layers, as in the common (and pretty coarse-grained) segmentation into SaaS, PaaS, and IaaS. But: it is about the (efficient, scalable,…) production of standardized, reusable services.

Cloud Computing is about using these services. It is about procurement, management, orchestration, accounting, and so on. In other words: Cloud Computing is mainly about service management, done in a standardized way. In a perfect world, all services of all providers (internal and external) would be managed consistently. There could be one consistent accounting, ending up with something like an ERP for IT. However, the service management aspect appears not to be at the centre of most discussions around Cloud Computing. Many discussions are just tactical comparisons and views of parts of Cloud Computing. Many discussions are about security. But what about service management, the really strategic thing? The part which will fundamentally change the way we do IT?

For sure there is a lot of discussion around service management today. ITIL is a good example. However, that covers just a part of IT. We have to look at it from the highest layer (business and its requirements, described as real business services like “managing contracts of type … in compliance with regulations and…”) down to granular web services used in SOA architectures. Services are sort of everywhere. And the future of IT is about having two layers:

  • Service production (In the Clouds)
  • Service consumption (Cloud Computing)

That requires fundamental changes in IT organizations. The core competency will be becoming best in class at mapping business requirements to the required services, i.e. at doing the “cloud computing” part right. For the “production” part of IT, it is about becoming best in class at providing efficient services. Typical IT organizations will thus be split into two parts: consumption/orchestration/management and so on on one side – and production in the private cloud environment on the other. Enabling this shift is the key issue for any organization today.

You might now ask, “what about security?”. Pretty easy: security is part of this. Every service has a functional part and a “governance” part: Where is the service allowed to run, due to compliance? What about encryption of transport and data? Who is allowed to access the service (or parts of it)? And so on… In other words: when you’ve solved the service management piece, you’ve automatically solved at least a large portion of the security piece. You might argue that some infrastructural aspects are not covered by this (how to enforce what you need for service governance) – but those, too, can be understood as part of your service environment.
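One way to read “every service has a functional part and a governance part” is to attach the governance attributes directly to the service description, so that placement and access checks travel with the service. The sketch below is a hypothetical data structure, not an existing standard or product; the field names and the example URL are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescriptor:
    """A service description carrying its own governance part."""
    name: str                      # functional part: what the service is
    endpoint: str                  # functional part: where it is reached
    allowed_regions: list = field(default_factory=lambda: ["eu"])
    encryption_required: bool = True
    allowed_roles: set = field(default_factory=set)

    def may_run_in(self, region: str) -> bool:
        """Compliance check: where may this service be placed?"""
        return region in self.allowed_regions

    def may_access(self, role: str) -> bool:
        """Access check: who may call the service (or parts of it)?"""
        return role in self.allowed_roles

contracts = ServiceDescriptor(
    name="contract-management",
    endpoint="https://example.internal/contracts",  # placeholder URL
    allowed_regions=["eu"],
    allowed_roles={"legal", "procurement"},
)

print(contracts.may_run_in("us"))    # False: governance forbids this placement
print(contracts.may_access("legal"))  # True
```

With such a descriptor, the service management layer that decides where to run a service and who may consume it is automatically enforcing a large portion of the security requirements – which is exactly the claim made above.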

A lot of aspects around Clouds, Cloud Computing, Cloud and Services, Cloud Security and so on will be discussed at EIC 2011/Cloud 2011 in Munich, May 10th to 13th.


© 2015 Martin Kuppinger, KuppingerCole