SDN is NOT the next big thing. SDCI is.

13.12.2013 by Martin Kuppinger

I have read many predictions recently that SDN (Software Defined Networking) is the next big thing in IT. Wrong. It is not. It is just a small piece in a bigger story. And just looking at SDN is not sufficient.

The next big thing is SDCI – Software Defined Computing Infrastructure. This is about “software-defining” everything. Hardware virtualization – “software defining hardware”, so to speak – is a reality. Software Defined Storage is becoming increasingly popular. SDN is another element. A number of vendors, such as VMware, talk about a Software Defined Cloud Datacenter. I don’t like that term, because the “Cloud” element might be nice from a marketing perspective, but tends to narrow things down to a specific form of Computing Infrastructure. So I will use SDCI for now.

When looking at SDCI versus SDN, claiming that SDN is the next big thing is like inventing the locomotive without the rail infrastructure. It solves only a portion of the problem, from a technical, network-centric view.

However, SDCI is far more than that. It is about managing how business services are operated on a flexible computing infrastructure, and it must include all elements of this infrastructure. It is about defining the policies for the entire infrastructure. This is an interesting challenge, because it is no longer about network, storage, or other technical policies, but about translating business policies. Regulatory compliance, security requirements, availability, performance, but also the willingness of the business to pay for a certain level of service – all of that flows into policies that define how the infrastructure is used and how to balance the various requirements.
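To make the idea of “translating business policies” a bit more concrete, here is a minimal, purely hypothetical sketch in Python. All attribute names, thresholds, and the mapping itself are invented for illustration; they are not taken from any product or standard.

    from dataclasses import dataclass, field
    from typing import List, Dict

    @dataclass
    class BusinessServicePolicy:
        # Business-level requirements attached to a service, not to a switch or a SAN
        service: str
        data_classification: str        # e.g. "PII" or "public"
        required_availability: float    # e.g. 0.999
        max_latency_ms: int
        monthly_budget_eur: int
        allowed_jurisdictions: List[str] = field(default_factory=list)

    def derive_infrastructure_constraints(p: BusinessServicePolicy) -> Dict[str, dict]:
        """Translate one business policy into constraints a software-defined
        compute/storage/network stack could enforce (illustrative mapping only)."""
        return {
            "compute": {"redundancy": "active-active" if p.required_availability >= 0.999 else "single-instance"},
            "storage": {"encrypt_at_rest": p.data_classification == "PII"},
            "network": {"isolation": "dedicated-segment" if p.data_classification == "PII" else "shared",
                        "max_latency_ms": p.max_latency_ms},
            "placement": {"allowed_regions": p.allowed_jurisdictions,
                          "monthly_cost_ceiling_eur": p.monthly_budget_eur},
        }

    policy = BusinessServicePolicy("order-management", "PII", 0.999, 50, 2000, ["EU"])
    print(derive_infrastructure_constraints(policy))

The point of the sketch is only that one business-level statement fans out into consistent constraints for compute, storage, network, and placement – which is exactly what a network-only view cannot deliver.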

SDCI will also revolutionize security, in particular network security. In dynamic environments there is no place for traditional firewalls anymore, but there are fantastic new opportunities for securing information. Such infrastructures allow us to manage security consistently across “machines”, storage, and network, in the context of business policies and in the context of identities. Instead of having multiple disparate approaches to security – a little bit of firewall here, a little bit of endpoint security there, some IAM elsewhere, etc. – we are heading towards integrated security. This integrated security can still be layered, but it will be layered in an integrated manner, unlike “layered” security today, which means stacking multiple levels of disparate security where no one really knows how good the combined result is – simply because there is no integration, no consistent view, no consistent policies.

The security aspect is another reason why SDNs by themselves are not what we need. SDNs continue the segregation of today; they allow us to repeat old mistakes on a higher level. SDCI allows us to do things better. That is why SDCI is the next big thing – and it will become a really big thing.


CeBIT – Shareconomy without connectivity?

06.03.2013 by Martin Kuppinger

Yesterday I spent a day at the CeBIT fair, still the world’s largest IT fair. Besides the many interesting meetings I had scheduled in advance, I started thinking about the CeBIT “Leitthema” – their “claim of the year”. This year it was “Shareconomy”. I still do not know what this term is supposed to mean. There is some fuzzy description on the CeBIT homepage, but in contrast to topics like “Cloud” and “Managing Trust” in 2011 and 2012 respectively, Shareconomy – described as “sharing and using information, resources and experience based on new forms of collaboration” – is a very amorphous concept. They then try to associate it with crowdsourcing, smart infrastructures and smart grids, data security, big data, and so on.

In fact, I think that there is something behind this rather strange buzzword. Back in September 2012, KuppingerCole hosted an event about the 4Cs: Communication, Collaboration, Content, and Cloud, which was about enabling new ways of collaboration and communication in a secure way. That probably is what the Shareconomy is all about.

When I look at our advisory business, I see another red-hot topic. In German I’d call it “Umgang mit Dritten”, i.e. how to interact with third parties and the services they provide in a consistent, standardized way. That is about Cloud Security, Identity Federation, the API Economy and security therein, and so on. Opening up the perimeter and supporting business processes that integrate business partners, customers, etc. is highly important. So maybe that is also part of the Shareconomy. For sure, you will be able to learn a lot about this at our upcoming EIC – the real stuff, not the marketing buzz and fuzz. To highlight just a few of the sessions:

However, the thing that confused me most at CeBIT – in the context of their Shareconomy claim – was the lack of free WiFi. Sharing without connectivity? Or at least sharing without free or affordable connectivity? Will that work? I doubt it. I used my UMTS cards in the notebook and the iPad, because otherwise I would have had to pay 30 € for a 4-hour WiFi pass. That is far more than even the old-school hotels that still charge for WiFi. Ridiculous.


A totally unsurprising proposal for European cloud initiatives

23.11.2011 by Martin Kuppinger

Today I received a press release from SAP about a new study by Roland Berger (a large consulting firm) and SAP with the title “Cloud Computing brings new growth opportunities to Europe’s IT and communications industry”. It ends with a program consisting of five points, the proposal of Roland Berger and SAP. The points are:

  • Define a European legal framework for data protection and data security
  • Define a “European Cloud Gold Standard” as sort of certificate for cloud providers
  • Spend EU money on research and development around innovations in the cloud
  • Support Cloud Computing for medium-sized businesses (based on the already existing programs)
  • Public services and governments should procure cloud services, showing their trust in the cloud offerings (and thus driving others to procure such services as well)

There were some other well-known findings like the fact that IaaS is dominated by some large US companies like Amazon and Microsoft, that the Cloud Computing market will grow, and so on.

But, to be honest: all of this is self-evident or already exists. And some aspects are questionable. Yes, there is growth potential in the cloud – for all IT providers in all regions worldwide. We have all known that for years. There is an existing European legal framework for data protection, at least to some extent. There is room for improvement, but it isn’t missing (and the report claims that EU standards for data protection are missing, which is just wrong). A worldwide accepted standard for cloud services is required – good and strong certifications. But the question isn’t whether we need it; it is what it could look like and how it could be granular enough for quick and efficient procurement processes. Innovation: great thing. The EU has been investing in that for years. And yes, it helps, but innovation comes from a lot of different sources.

The last two points, medium-sized businesses and governments procuring cloud services, reveal a fundamental misunderstanding: Cloud Computing is just another deployment model. The reason to move to a cloud service is that it might be the most appropriate way to procure a service (instead of on-premise production). But there is no reason for a government to move to the cloud as long as these services aren’t better than on-premise services – better in a comprehensive sense, taking all factors like functionality, price, risk ratings, security, availability, and so on into account. The same is true for medium-sized businesses.

At the end of the day it is about providing services to the customer which are good enough. Certifications, standards, and legal frameworks will help. But the main point is still about providing the better service, not about complaining about the limiting factors. By the way: it would also have been worth mentioning that the EC Privacy Directive will undergo significant changes next year, beyond what it provides today around data protection.


SAML, SCIM – and what about authorization?

16.11.2011 by Martin Kuppinger

Cloud Computing is just another delivery model for IT services. However, due to the specifics of cloud services such as multi-tenancy and many others, the requirements are sometimes even higher than for on-premise services. One of these requirements in well-architected IT environments and for well-architected applications is the ability to externalize security. That includes relying on external directories for administering and authenticating users, e.g. on Identity Providers. It might include the capability of “cloud provisioning”, e.g. receiving changes to users – even though I clearly favor federation, as the loosely coupled approach, over provisioning. It should include support for external logs, event monitoring, and so on – unfortunately, that appears to be a topic no one is really working on.

And it should include the capability of managing authorizations in cloud services based on centrally managed policies (on-premise or using a cloud service – but centrally, not per cloud service!). There is limited value in federating users and then doing all the administration work per cloud service using the cloud service’s proprietary management GUIs or APIs. However, authorization is where the problem really starts.

There is a standard for distributed, dynamic authorization management out there: XACML, the eXtensible Access Control Markup Language. It can describe the rules. It can work with different repositories for identity information (PIPs, Policy Information Points) and for other information required for authorization decisions, it provides interfaces to custom and standard applications, and so on. However, I haven’t seen XACML in the cloud so far. Unfortunately, I also haven’t seen any real alternative to XACML.
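As a rough illustration of the roles XACML defines – a Policy Enforcement Point in the application asking a Policy Decision Point, which in turn pulls attributes from Policy Information Points – here is a heavily simplified, hypothetical sketch. The policy content, names, and data are invented, and the real standard of course works with XML policy artifacts rather than Python functions.

    # Policy Information Point (PIP) stand-in: an attribute store the PDP can query.
    PIP_ATTRIBUTES = {"alice": {"department": "finance", "clearance": "high"}}

    def pdp_decide(subject: str, action: str, resource: str) -> str:
        """Policy Decision Point: evaluates centrally managed policies,
        enriching the request with attributes from the PIP."""
        attrs = PIP_ATTRIBUTES.get(subject, {})
        if resource == "quarterly-report" and action == "read":
            return "Permit" if attrs.get("department") == "finance" else "Deny"
        return "Deny"   # default deny

    # Policy Enforcement Point (PEP) inside the (cloud) service:
    decision = pdp_decide("alice", "read", "quarterly-report")
    print(decision)  # -> Permit

The essential point is that the service only enforces a decision; the rules themselves live – and are managed – outside the service.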

Some might claim that SAML could do that job. There is the SAML Authorization Decision Query as part of the SAML 2.0 standard. But that leads pretty quickly to SAML/XACML interoperability and things like the SAML 2.0 profile of XACML. In fact, if it is about having a consistent set of policies expressed in a common standard, XACML is what we need. We need to define and manage these policies consistently per organization, not per service. Services should request authorization decisions – at least in an ideal world. However, when looking at the cloud, another aspect comes into play: performance. Performance is a general issue when externalizing authorization decisions. For cloud services which have to ask many different authorization “engines”, it is an even bigger issue. And there is the issue of latency, which is a factor in cloud environments due to the geographical distances you might find there.

Thus, while XACML is fine for defining policies, the interesting question is: should cloud services ask external authorization engines for every authorization decision? Or is it better to push the relevant XACML policies to the cloud service and make the authorization decisions there? Even then, we still need a way to efficiently access the PIPs for the other attributes required to perform the authorization decision.
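The trade-off can be sketched as two call patterns: asking a remote decision point per request versus evaluating pushed policies locally with periodically refreshed PIP attributes. The following is a hypothetical sketch of the idea, not an implementation of any existing product or profile.

    import time

    # Option 1: ask a remote PDP for every single decision (one round trip each time).
    def authorize_remote(call_remote_pdp, subject, action, resource):
        return call_remote_pdp(subject, action, resource)   # network latency on every request

    # Option 2: push the centrally managed policies to the cloud service and decide
    # locally, refreshing policies and cached PIP attributes only periodically.
    class LocalDecisionPoint:
        def __init__(self, fetch_policy, fetch_attributes, ttl_seconds=300):
            self._fetch_policy = fetch_policy          # pulls the current policy set
            self._fetch_attributes = fetch_attributes  # pulls PIP attributes in bulk
            self._ttl = ttl_seconds
            self._policy, self._attrs, self._loaded_at = None, None, 0.0

        def _refresh(self):
            if time.time() - self._loaded_at > self._ttl:
                self._policy = self._fetch_policy()
                self._attrs = self._fetch_attributes()
                self._loaded_at = time.time()

        def decide(self, subject, action, resource):
            self._refresh()
            return self._policy(subject, action, resource, self._attrs)

Option 2 keeps latency out of the request path but raises exactly the question above: how fresh do the policies and PIP attributes at the cloud service have to be?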

I don’t have the full answer. However, I’m convinced that XACML is a key element for authorization in the cloud, given that it is the standard for externalizing authorization decisions. But it might need some enhancements to work optimally for cloud security as well. It will definitely need improved security architectures for cloud services themselves, so that they can externalize authorization decisions and rely on centrally managed policies. And it definitely needs some thinking about the overall security architecture for cloud services. So I’m looking forward to comments on this post – maybe I’ve missed something and everything is already there; maybe this initiates some enhancements to standards. I don’t know, but I’m really curious.


Data Protection Laws – Location or Information?

15.06.2011 by Martin Kuppinger

One of the intensively discussed issues in Cloud Computing is compliance with local data protection and privacy laws. The European laws, for instance, are sort of “location-dependent”. It is much easier to deal with PII (Personally Identifiable Information) within the borders of the EU than outside of that region. That is the reason why many large Cloud Providers build data centers within the EU to support their European customers.

The question that recently came to my mind is: does it really make sense to focus on location? Shouldn’t we rather focus on the security of the information itself? The goal is to prevent abuse of PII and other sensitive information. The laws focus on processing, with a very broad definition of the term “processing”. Processing is allowed only if the providers follow specific rules. However: when we clearly define these rules, when we audit the providers, when we do certification – why should the location really matter?

You could argue that there are regions where you wouldn’t expect the government to respect these rules. You could argue that there are countries, like the US, where some laws contradict European laws. However, all of that could easily be part of the rules defined in the law. There is no logical reason to do it by location. If you look at the way governments in some European countries act, I wouldn’t say that location is the best basis for enforcing data protection.

From my perspective it would be a good idea to fundamentally rethink data protection laws and to define protection requirement levels for different types of PII and different types of processing. Then the rules that external (cloud) providers have to fulfill can be defined – independent of location. If one of these rules contradicts the local laws of the country where the provider has its data center, the result would be the same as today. But overall, we would end up with far more flexibility for the cloud.
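To illustrate what information-centric (rather than location-centric) protection levels could look like, here is a deliberately simple, hypothetical sketch. The categories, processing types, and required controls are invented examples, not a proposal for actual legal wording.

    # Each category of PII and type of processing maps to the controls a provider
    # must demonstrate, wherever its data center happens to be located.
    REQUIRED_CONTROLS = {
        ("health-data", "analytics"):  {"encryption", "audited-access", "certification", "data-subject-consent"},
        ("contact-data", "storage"):   {"encryption", "audited-access"},
        ("contact-data", "marketing"): {"audited-access", "data-subject-consent"},
    }

    def provider_acceptable(pii_category: str, processing: str, provider_controls: set) -> bool:
        required = REQUIRED_CONTROLS.get((pii_category, processing), {"certification"})
        return required.issubset(provider_controls)

    print(provider_acceptable("contact-data", "storage",
                              {"encryption", "audited-access", "certification"}))  # True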

However, IT reality is always far ahead of the laws. Thus we probably have to accept that it will take many years until the laws reflect the reality of today’s globally distributed, service-based IT.


SCIM – will SPML shortcomings be reinvented?

23.04.2011 by Martin Kuppinger

There is a new initiative driven by Google, salesforce.com, and Ping Identity called SCIM (Simple Cloud Identity Management). It claims to overcome the shortcomings of SPML (Simple Provisioning Markup Language), a standard that has been around for some 10 years. SPML aims to be a standard for provisioning information between systems. It is supported by most provisioning and access governance tools, but only by few target systems. SAP is probably the most important supporter.

Google, salesforce.com, and others in the cloud don’t support SPML. Thus, provisioning to these systems requires using proprietary APIs, if available at all – Google and salesforce.com provide such APIs, but not every cloud provider does. To overcome this, work on SCIM has started.

The first question, however, is: why not use SPML? The main reason might be that SPML is XML-based and does not focus on REST, which appears to be the somewhat more efficient (and, especially, more accepted) way to implement standards for the cloud. Another might be that SPML is moving forward very slowly, if it is moving at all. There are many deficiencies in SPML, no doubt about that. They start with the limited support by non-IAM vendors. There are technical limitations as well, including performance issues in large-scale deployments and limitations regarding what can be provisioned via SPML.
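For flavor, this is roughly what a REST/JSON provisioning call of the kind SCIM aims at could look like. The endpoint, token, and schema details below are placeholders – the specification was still a moving target at the time of writing – so treat this as an illustrative sketch only.

    import json
    import urllib.request

    user = {
        "schemas": ["urn:scim:schemas:core:1.0"],   # placeholder schema identifier
        "userName": "alice@example.com",
        "name": {"givenName": "Alice", "familyName": "Example"},
        "emails": [{"value": "alice@example.com", "primary": True}],
    }

    request = urllib.request.Request(
        "https://cloud-service.example.com/scim/v1/Users",   # hypothetical endpoint
        data=json.dumps(user).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},          # placeholder credential
        method="POST",
    )
    # urllib.request.urlopen(request)  # a single POST would create the user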

Nevertheless, I’d like to ask two questions:

  • Wouldn’t it be better to join the forces behind SPML and SCIM and build an SPML version 3.0 that supports REST as well?
  • If working on a new or improved standard, wouldn’t it make sense to address all relevant use cases? SPML doesn’t today, and SCIM is not likely to either, judging by the information provided so far.

The first aspect seems to be more of a political issue between different vendors. However, having two standards doesn’t help anyone at the end of the day.

That is even more true if both standards are too lightweight and don’t cover everything companies need today. Looking at the small piece of the SCIM specification published so far, it seems that SCIM will only scratch the surface of what is required. The use cases are focused on providing user information to cloud services. However, the topic isn’t identity management, it is identity and access management. The access or entitlement part is the big thing to solve. Dealing with the different APIs of different cloud providers for identities is an issue, but it isn’t the biggest one – several vendors (federation, classical on-premise provisioning, cloud provisioning) have addressed this, at least for the leading cloud providers.

But what about controlling who is allowed to do what in these services? How do we manage entitlements, e.g. group memberships, authorization rules, and other things? XACML is a standard which supports this, but again there is little to no support for XACML by cloud providers – just as with SPML. Thus, when starting to define a new standard, it shouldn’t be too simple, which SCIM appears to be at this point in time. It has to be one which covers all relevant use cases of identity and access management. There is only limited value in providing user information to a cloud service but still having to use the proprietary web administration interface (or some proprietary APIs) to control access for that user, to define groups, roles, policies, and so on.
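The gap can be made concrete with a short, entirely hypothetical sketch: the user object can be pushed in a (more or less) standard way, but the entitlements still end up in provider-specific calls. All client objects and method names below are invented for illustration.

    def provision_user(scim_client, user):
        # The standardized (SCIM-style) part: create the user object.
        scim_client.create_user(user)

    def grant_access(provider_api, user_id):
        # The access part still needs the provider's proprietary API or admin GUI:
        provider_api.add_to_group(user_id, "Sales")        # proprietary call
        provider_api.assign_profile(user_id, "ReadOnly")   # proprietary call
        # A standard that also covered entitlements (groups, roles, policies)
        # would let both steps be driven from one centrally managed policy.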

My conclusion: there should be open standards for identity and access management in the cloud. Building on proprietary services means repeating errors made before. But a new standard shouldn’t be too limited from the beginning. That, by the way, is one of the reasons I see behind the very limited success of SPML: it was too limited. I remember a conversation years back with one of the leading people involved in SPML, where I suggested looking at use cases like supporting client lifecycle management solutions, e.g. tools supporting (amongst other features) software deployments. There are vendors in the client lifecycle management market building custom integrations to HR or provisioning tools today, but not based on SPML – because they have never heard of SPML and because SPML never looked at this use case.

There might be a good reason for an effort like SCIM. But a standard that is merely REST-based and doesn’t really think beyond what SPML supported won’t solve the real-world problems. Thus I strongly recommend rethinking SCIM and looking at significantly extended use cases.

If you would like to discuss this with me in person, the best place to meet is EIC in Munich, May 10th to 13th.


Why you should focus on the infrastructure layer

21.04.2011 by Martin Kuppinger

In these days of slowly increasing maturity of Cloud Computing, it becomes more and more obvious that – and why – IT depends on a well-thought-out layer which I tend to simply call “infrastructure”. I have two simple pictures of IT in mind:

  • The somewhat classical model of platform, infrastructure, and software, as found in the PaaS, IaaS, and SaaS of the common Cloud Computing meta models. It’s about hardware and other foundational components like operating systems, about the layer in between that manages and orchestrates everything, and about the applications themselves.
  • Another view also consists of three layers: the services exposed to the users (i.e. in most cases the business) on top, the service production (in the public cloud, a private cloud, or a non-cloudified IT environment) at the bottom – and a layer in between which, again, is used for managing and orchestrating everything. Again, this layer might best be called “infrastructure”.

This is the layer that connects everything. Thus, the efficiency and effectiveness of this layer are the foundation of the efficiency and effectiveness of the entire IT. Optimizing this layer makes it possible to better connect the available services to business demands and to manage the different layers in the cloud.

When looking at that layer, there are some few key elements:

  • Service Management, i.e. the entire area of procurement, service request management, accounting, availability, performance, and whatever it takes to ensure that services are delivered as expected
  • Information Security Management, including IAM (Identity and Access Management) and at least IT GRC (Governance, Risk Management, Compliance)
  • Application Infrastructures, i.e. middleware that connects services, enhances them where required, and handles the orchestration

Did I miss important elements? OK, there is classical IT security; however, that is part of Information Security – the reason we look at IT security is to protect information. You might add some other elements, but I tend to keep this model simple.

To me it appears more important to look at the dependencies between these three elements. Information Security and Service Management have to work hand in hand to ensure that access to services is restricted and controlled. Applications and Information Security are tightly related – think about how to build secure apps. And applications are, at the end of the day, nothing else than services which have to be managed.

I personally believe that starting with such a model and outlining the blueprint for your future IT definitely helps in separating the important from the less important things, and in focusing on building an IT ecosystem in your organization which is stable and works with whatever you plan to do in the Cloud.

See you at EIC 2011 in Munich, May 10th to 13th.


What I would expect from capacity MANAGEMENT

24.11.2010 by Martin Kuppinger

Recently I talked with a vendor about their news around capacity management. Capacity management is used as a term to describe products for optimizing workloads in data centers and making the best use of the resources within these data centers. Honestly, I hadn’t done a deep dive into that specific area before, and maybe I’m a greenhorn on advanced IT topics – but when vendors promise capacity management, I’d expect it to be somewhat active. However, I’ve learned that it is about reporting and analytics – even when associated with terms like dynamic, continuous, proactive, or whatever. I’ve checked several other offerings in the meantime, and it appears to be common understanding amongst these vendors that, in the capacity management space, management ends with some nice reports.

I would have used the terms capacity reporting or capacity analytics instead, but I know that vendor marketing tends to “beautify” their stories a little.

So what are my expectations of capacity management? The issue is obvious – it is complex to manage the workloads in increasingly virtualized and complex data centers. And it becomes even more complex when we have different “clouds”, private and public, where we might place these workloads. Thus we need support for optimizing this – dynamic or continuous capacity management is key. However, I’d at least expect a little more than just the information that there is something to be optimized. Achieving fully automated management will surely be a long journey, requiring a strong and valid policy model to ensure that nothing goes wrong. It will also require tight integration with Business Service Management, to ensure that the specifics of business processes, like high workloads at the end of the month or year, are met. It will even require that financial aspects (the costs of internal and external cloud resources) and governance aspects (which data and workloads are allowed to be placed where, given governance/compliance/security constraints?) are taken into account. In the meantime, automatically created “jobs” for the virtualization, server, and application management tools, which would easily make the proposed changes reality, would be a first step.
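To illustrate what such a “first step” towards active capacity management could look like – proposing concrete, policy-checked jobs rather than just reporting – here is a deliberately naive, hypothetical sketch; thresholds, host names, and the governance check are invented.

    def propose_jobs(hosts, policy):
        """hosts: {name: {"load": 0..1, "vms": [...]}}; returns proposed migration jobs."""
        jobs = []
        overloaded = [h for h, s in hosts.items() if s["load"] > policy["max_load"]]
        spare = [h for h, s in hosts.items() if s["load"] < policy["target_load"]]
        for src in overloaded:
            for vm in hosts[src]["vms"]:
                # governance check: may this workload run on the candidate host/cloud?
                if spare and policy["placement_allowed"](vm, spare[0]):
                    jobs.append({"action": "migrate", "vm": vm, "from": src, "to": spare[0]})
                    break
        return jobs

    jobs = propose_jobs(
        {"host-a": {"load": 0.95, "vms": ["erp-01"]},
         "host-b": {"load": 0.30, "vms": []}},
        {"max_load": 0.85, "target_load": 0.50, "placement_allowed": lambda vm, host: True},
    )
    print(jobs)  # [{'action': 'migrate', 'vm': 'erp-01', 'from': 'host-a', 'to': 'host-b'}]

The output of such a loop would be something the virtualization, server, and application management tools can actually execute – which is the difference between analytics and management.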

I’m looking forward to seeing capacity management become reality. There is definitely value in today’s solutions. But the value could be significantly bigger than it is.


Secure Pipes or the new Role of Telcos

17.11.2010 by Martin Kuppinger

This week I had a very interesting discussion with Werner Thalmeier, CTO of M86 Security, about the protection of systems and information. He used the analogy of the “secure pipe” to explain the approach they are following: today our drinking water is clean and we can use it straight from the pipe. In former days it wasn’t – it had to be boiled, filtered, and so on. The approach of M86 Security is basically the same – keep the pipe clean so that you don’t have to care about what comes out.

We ended up in a discussion of new challenges in that area, especially the “apps” for so-called smartphones (I still think this is the wrong term – they might be smart, but they aren’t made that much for doing phone calls. But that’s another story.). These apps are harder to secure, given that there isn’t a browser anymore where HTML code could be analyzed and so on. That leads to the question: what is the pipe, and where do we filter? The obvious answer is: the only place to secure that type of IT (i.e. the technology) in order to protect the information (it is about Information Security and the I in IT; Technology Security is just a means to achieve that goal) is at the providers.

That’s where Telcos come into play. They are the ones connecting the app providers and the devices. ISPs (which are frequently Telcos as well) have to be added, for apps running on WLAN-connected devices or on classical notebooks and desktop PCs, today or in the future. These are the ones who have control over the pipes.

What would that mean in practice? Telcos will have to offer this as a value-add service to their corporate customers. Corporate customers will have to look at the legal details and prepare policies for the usage of corporate devices and corporate information, if they haven’t done so yet. Telcos will then have to implement the filtering technology, best done in a way which allows corporate customers to flexibly apply their internal policies and to integrate this with existing security tools. For Telcos, that could be one (of many) value-add businesses. For corporate customers, it is about using Telcos as an MSSP (Managed Security Service Provider). Obviously, we have to rethink security in a perimeter-less world – and Telcos can play a vital role in the future information security business.


Cloud Computing is mainly Service Management

28.10.2010 by Martin Kuppinger

When looking at all the discussions around the “cloud”, I still miss a focus on the real essentials of a strategic (!) approach to using clouds. Clouds – in the currently common understanding of private, hybrid, and public clouds – are in fact nothing other than IT environments which produce IT services. These services are provided at many different layers, as in the common (and pretty coarse-grained) segmentation into SaaS, PaaS, and IaaS. But: it is about the (efficient, scalable, …) production of standardized, reusable services.

Cloud Computing is about using these services. It is about procurement, management, orchestration, accounting, and so on. In other words: Cloud Computing is mainly about service management, in a standardized way. In a perfect world, all services from all providers (internal and external) would be managed consistently. There could be one consistent accounting, ending up with something like an ERP for IT. However, the service management aspect of Cloud Computing appears not to be at the centre of most discussions around Cloud Computing. Many discussions are just about tactical comparisons and views of parts of Cloud Computing. Many discussions are about security. But what about service management, the really strategic thing? The part which will fundamentally change the way we are doing IT?

For sure there is a lot of discussion around service management today. ITIL is a good example. However, that covers just a part of IT. We have to look at it from the highest layer (business and its requirements, described as real business services like “managing contracts of type … in compliance with regulations and…”) down to granular web services used in SOA architectures. Services are sort of everywhere. And the future of IT is about having two layers:

  • Service production (In the Clouds)
  • Service consumption (Cloud Computing)

That requires fundamental changes in IT organizations. The core competency is to become best in class at mapping business requirements to the required services, i.e. at doing the “cloud computing” part right. For the “production” part of IT, it is about becoming best in class at providing efficient services. Typical IT organizations will thus be split into two parts: consumption/orchestration/management and so on – and production in the private cloud environment. Enabling this shift is the key issue for any organization today.

You might now argue: “what about security?”. Pretty easy: security is a part of this. Every service has a functional part and a “governance” part: where is the service allowed to run due to compliance? What about encryption of transport and data? Who is allowed to access the service (or parts of it)? And so on. In other words: when you’ve solved the service management piece, you’ve automatically solved at least a large portion of the security piece. You might argue that there are some infrastructural aspects not covered by this (how to enforce what you need for service governance). But that, too, can be understood as part of your service environment.
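A minimal sketch of what carrying a “governance part” next to the functional part of a service description might look like; all attributes and values are hypothetical examples, not a proposed schema.

    service = {
        "name": "contract-management",
        "functional": {"operations": ["create", "approve", "archive"]},
        "governance": {
            "allowed_regions": ["EU"],            # compliance: where it may run
            "transport_encryption": "TLS",
            "encrypt_data_at_rest": True,
            "access": {"approve": ["role:contract-manager"]},  # who may use which part
        },
    }

    def placement_allowed(svc: dict, region: str) -> bool:
        return region in svc["governance"]["allowed_regions"]

    print(placement_allowed(service, "EU"))  # True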

A lot of aspects around Clouds, Cloud Computing, Cloud and Services, Cloud Security and so on will be discussed at EIC 2011/Cloud 2011 in Munich, May 10th to 13th.


© 2014 Martin Kuppinger, KuppingerCole