The Future of Corporate IT

06.05.2014 by Martin Kuppinger

Today’s IT is driven by some major evolutions, and everything done in IT has to take them into account. One is Social Computing. The second is Mobile Computing. The third is Cloud Computing. All these trends affect IT fundamentally. The consumerization and deperimeterization of IT are logical consequences: information technology (IT) is available to virtually everyone and virtually everywhere.

When looking at the future of IT Organizations, Cloud Computing has the biggest impact. With the rise of Cloud Computing, IT managers and Business started to feel that internal, on-premise IT needs to be able to compete against attractive external offerings. The IT supply chain is changing fundamentally: there are far more suppliers within reach. This evolution is neither new nor surprising; IT overall is simply moving from manufacturing to industrialization. For IT Organizations that means they either adapt to that new age of “industrialized IT” or they will fail.

[Figure: The Future IT Paradigm by KuppingerCole]

The Future IT Paradigm by KuppingerCole, a standardized model for building your future IT, provides the guideline for organizations to move their IT Organization and IT Infrastructure to the next level and to make it future-proof. It helps in fulfilling the major business requirements:

  • Provide the services that business really needs – agile, just-in-time, cost-effective, and in the way business really needs them
  • Enforce Information Security and protect the sensitive business information and intellectual property of the organization
  • Mitigate your IT risks, stay compliant, and enforce an enterprise-wide Governance approach

Looking at the Future IT Paradigm by KuppingerCole, it becomes obvious that the key to the new IT Organization is the segmentation of IT according to the layers defined in this model. But there is much more: it is also about creating new roles and responsibilities in the IT Organization. It helps IT Organizations in regaining leadership and making their on-premise IT production state-of-the-art again. The Future IT Paradigm by KuppingerCole consists of three layers – plus the Governance infrastructure and IT & Security Management.

Business Service Delivery focuses on providing exactly the services business needs, in the way business needs them, and on time. It is all about interfacing Business and IT. This is where Business/IT alignment moves from a buzzword towards reality.

Service & Information Management is what we could also call “Core IT”. This is where services are managed and where IT services are transformed into business services. This is also where IT Security is enforced. And it is the level at which Information is managed.

IT Service Production is about producing services and providing them to the business. This layer supports all types of production environments, from on-premise to any type of cloud. These production units have to provide services in a standardized way. Ideally, they are themselves organized according to that same three-layered structure, understanding the output they provide as business services for their customer – i.e. IT itself.

For a full view of that model, an in-depth description of what it means for the IT Organization, and the structure, departments, and skills required, have a look at the KuppingerCole Report #71200, The Future of IT Organizations.


When are technologies really disruptive?

19.04.2013 by Martin Kuppinger

A few days ago I read an article about “disruptive technologies” in the (glossy) customer magazine of a large system integrator. The article mentioned technologies such as Big Data, Cloud Computing, or Mobile Computing. But are these technologies really disruptive?

The definition of “disruptive innovation” in Wikipedia is as follows:

A disruptive innovation is an innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology. The term is used in business and technology literature to describe innovations that improve a product or service in ways that the market does not expect, typically first by designing for a different set of consumers in the new market and later by lowering prices in the existing market.

Amongst the most prominent examples of disruptiveness are the replacement of sailing ships by steamboats in transportation or of horse-drawn carriages by automobiles. Sailing ships play virtually no role at all in transportation anymore, at least not in the western hemisphere and across long distances. The same is true for horse-drawn carriages.

When looking at the technologies mentioned at the beginning, Cloud Computing is the most disruptive one from my perspective. Cloud Computing has massive impact on licensing models, even for on-premise IT technology. However, is “Cloud Computing” really disruptive? Or are just some parts of Cloud Computing such as SaaS vs. on-premise software disruptive – and maybe even some areas therein such as Office applications?

And where is the disruptiveness in Big Data? I do not see technologies being replaced by Big Data. Big Data allows for new types of solutions, but it is not disruptive at all.

When looking at Mobile Computing, one might argue that the recent drop in PC sales is a clear indicator of disruptiveness. And yes, there is some likelihood that the classical PC market will shrink further. However, Mobile Computing appears too unspecific to be the disruptive innovation. There are tablets, smartphones, phablets, etc. that are challenging the PC market. But do we really know how the future will look? I just recently switched back from the Apple iPad to more productive devices, i.e. a tablet PC (ultrabook) and, in addition, a Microsoft Surface RT, which at least comes with Office apps and is able to display a Word document or PowerPoint presentation with correct formatting. And I have a classical PC under my desk, with three 27” displays attached – mail to the left, Word in the middle, browser and Skype to the right. The different trends within Mobile Computing are disruptive for traditional PC technology. But what will be the result? We just do not know yet.

Overall, not defining everything as disruptive might be helpful. Many things that appear to be disruptive during the hype turn out to be not that disruptive. The reality of most organizations will be hybrid environments for the foreseeable future – not pure Cloud Computing. There will remain a significant need for on-premise IT, for desktop PCs, and for classical databases and BI (Business Intelligence). It is important to look at new opportunities, but doing so with some realistic distance helps – especially in IT.


The Future of IT Organizations – why IT needs a marketing department

16.05.2012 by Martin Kuppinger

Some weeks ago we published a report called “The Future of IT Organizations”. This report talks about how to restructure IT Organizations, following the basic structure we propose for IT in the KuppingerCole IT Paradigm. That paradigm was first described in the KuppingerCole Scenario “Understanding IT Service and Security Management”. From our perspective, IT organizations have to change fundamentally in order to redefine the way we do IT so that it can better deal with challenges like Cloud Computing.

When looking at the future of IT, there is one area which I find particularly interesting. Some of this came to mind when reading a blog post by Chuck Hollis, Global Marketing CTO of EMC Corporation. The post is titled “Why IT Groups will invest in Marketing” and focuses on exactly that need for marketing.

What I liked in that post was the distinction between inbound and outbound marketing for IT – a distinction I picked up and for which I have to credit Chuck. I then aligned it with the KuppingerCole IT model, adding another element: “product management”.

The IT of the future is demand-driven. Today’s IT should be as well, but reality frequently shows a different picture. Providing the services business really needs is very much about that demand-driven IT. That requires understanding the customers. And that is where the topics of Outbound and Inbound Marketing come into play.

Outbound Marketing is the more common approach. We all are familiar with this in everyday life when getting confronted with advertisements and other types of market communication from vendors. For IT Organizations there are two main aspects for Outbound Marketing:

  • Positioning IT as the one and only source of the services business requires
  • Selling the IT services which are produced on-premise as part of these business services

The first part is of high importance because IT should remain in control (or get back in control) of all the IT services which are either produced on-premise or procured from the Cloud. Without centralized control, organizations will, over time, struggle massively with their IT services. Furthermore, there is no way to get a grip on IT cost without such centralized control.

The other part of outbound marketing is mandatory as well. The ability to sell the services which are produced on-premise is important. On-premise IT is in competition with cloud services. Thus it is not only about producing the “better” IT services; it is also about selling them. IT Organizations have to change their attitude from being reactive to becoming a proactive provider of services to the business organization.

But there is the other side of the coin as well: Inbound Marketing. Inbound Marketing is even more about the customer’s needs – with the customer being the business part of your organization. Inbound Marketing is (amongst other things) about

  • The specific needs of your customer
  • Identifying the buyers on the customer side (which even in large organizations frequently is not as clear as it should be when it comes to budget discussions)
  • Understanding how the customer wants to consume services

It is about understanding the customer and driving the IT Organization in a way that the right services are offered. In fact this is about a strategic and standardized approach to providing exactly the services business needs.

From an organizational perspective, IT has to fundamentally change its interaction with business. It is about bringing the demand-supply principle to life, which has been discussed for quite a while. The need to do that is greater than ever.

What do IT organizations need at that level?

  • They need to identify the “customer’s customers”, i.e. the persons within the business organization who request the business services. That might require changes in the business organization as well, given that the business needs contact points. Notably, these persons might be less technical than today, given that the ideal of the future IT organization is to provide business services the way business needs them.
  • They need, as mentioned earlier, IT Marketing, i.e. persons caring for the outbound as well as the inbound marketing.
  • They need “product managers”. If you look at large and successful vendors, product management always plays an important role. Product managers are the link between the customer and software development; they have to translate between customer requirements and development. Much the same role applies here: they work closely with IT Marketing and the customer’s customers on one side and with Service Management within the IT Service & Security Management layer on the other, mapping between the two.

Simply put: IT Organizations, in their changing role as suppliers to the demand of business, should act like successful software companies – with the difference that they don’t need the same level of sales, but rather the marketing and product management parts.



EIC 2012 – what I will talk about

11.04.2012 by Martin Kuppinger

Next week, EIC 2012 (European Identity and Cloud Conference) will take place in Munich. The conference will again grow significantly, and we will have a mass of interesting sessions there, ranging from keynote sessions to panels, best practices, and several workshops and roundtables. You definitely shouldn’t miss that conference.

I want to give a sneak peek at what I will talk about this year. The Opening Keynote on Tuesday, April 17th, 2012 will be about trends in IAM, GRC, Cloud Computing, and Mobile Security. I also will provide a quick view of the KuppingerCole IT Paradigm, which is one of the central themes provided by KuppingerCole at EIC 2012. We have defined that paradigm and the underlying model based on our experiences in research and advisory services to provide a consistent guideline for refining IT and to really become ready for the age of Cloud Computing, Mobile Computing, and Social Computing. This model is about how to provide the services business really wants while securing corporate information adequately. I think it helps a lot in adapting IT organizations to the changing requirements of business.

A little later, I will be part of an interview-style keynote session, which is about the privacy and information security challenges we are facing in 2012 and beyond. This definitely will become an interesting discussion, with Roy Adar of Cyber-Ark, Shirief Nosseir of CA Technologies, and Jim Taylor of NetIQ participating and Dr. Nigel Cameron of the Center for Policy and Emerging Technologies (C-PET) moderating the session.

The following day, I’ll start with a session that explains how the KuppingerCole IT Paradigm helps in increasing the value IT provides to the business. Following that presentation, we will have a panel discussion about how IAM can catalyze the secure enterprise. This panel will definitely become a highlight of EIC 2012, with some Ex-Burton analysts participating: Craig Burton, Gerry Gebel, and Mike Neuenschwander.

After that session, I’ll use the KuppingerCole IT Paradigm to describe what the IT Organization of the future should look like – one which is much closer to the business and which helps in dealing with changes such as Cloud Computing. There will be a new report on this topic coming out right before EIC (and there are also new and updated reports on the KuppingerCole IT Paradigm available).

Another very valuable report will be the one on “Personal Data – Life Management Platforms”. There will be a roundtable on that topic moderated by Doc Searls, of the Berkman Center for Internet and Society at Harvard University, and myself.

Another session will be about “One IT, One IAM” – this is a session going beyond IAM and linking Cloud, IAM, and the way we structure IT. This is about how to end up with one IT that serves all your needs instead of separate solutions for different types of Clouds and your on-premise IT.

Also pretty interesting is the “Re-engineering IAM” session. I have just written two reports, an update on my view of Access Governance Architectures and another one looking at whether, when, how, and where to migrate existing legacy Provisioning systems you might have.

In a joint session with Craig Burton we will link the KuppingerCole IT Model and the API Economy, a paradigm focusing on the increasing number of available APIs and their use.

Besides these sessions, I’m also involved in some others around virtualization and the security of Big Data. And there will be some other new reports out for EIC, written by several of the KuppingerCole analysts like Craig Burton, Fulup ar Foll, Prof. Dr. Sachar Paulus, Mike Small, Dave Kearns, and me.

So there’ll be a lot of interesting topics at EIC 2012. For sure there will be many more sessions on other topics, and virtually all relevant players will be in the exhibition area. So don’t miss EIC 2012.

You will find all information about EIC here: www.id-conf.com

All current and upcoming KuppingerCole research is available here: www.kuppingercole.com/reports



Agility, service levels, and cost

06.10.2011 by Martin Kuppinger

Some two weeks ago I was at the EMC EMEA Analyst Summit in France. In one of the sessions, Chuck Hollis, VP Global Marketing CTO of EMC Corporation (what a title, isn’t it?), made a very good comment when one of the presenters talked about the need of IT, when providing services, for

  • agility and speed
  • service level fulfillment and improvement
  • cost optimization

He pointed out that IT typically looks at these in the order cost – service level – agility, while business looks at agility – service level – cost. I really like that observation.

You might argue that business is always talking about IT being too expensive. Yes, it is. But there are reasons for that. One reason is that business still frequently doesn’t really have an answer to the “what’s in it for me?” question. If business doesn’t see a value (and supporting the need for agility, i.e. enabling business to become better, is sort of the big theme behind business value), it looks at cost. No surprise at all. However, if IT provides what business really wants, the discussion is much less about cost.

In other words: IT has to understand what business really needs. Look at the business services they want, at the business value, and at how IT supports agility and speed. Ensure the service levels. And then try to deliver at optimized cost.

Honestly: that isn’t a groundbreaking insight. Many of us have been talking about this for years. But do we act accordingly? Not always. Always keeping in mind that the order should rather be agility – service level – cost than the other way round might help us become better at Business/IT alignment.


Who are the good guys – the ones that keep you informed about security issues, or the others?

20.09.2011 by Martin Kuppinger

I understand the reasoning behind it – but it is still contradictory. People expect IT vendors to inform them quickly about security issues. And people then blame them for those security issues. OK, someone affected by a security issue has some reason to blame the company responsible for it. Nevertheless, some more fairness would help in achieving even more openness. If you have to admit a security issue and you fix it, that is obviously better than just trying to hide what has happened.

Let’s take some examples. Microsoft has been bashed for years for not doing enough to secure its products. They have built a sophisticated system for patching and informing the public. They are very open regarding security weaknesses. But they are still blamed for being insecure. Apple is much more reluctant in its openness regarding security issues, but isn’t blamed as much as Microsoft. Fair or unfair? I personally prefer the Microsoft approach – Microsoft was amongst the first to provide a patch in the DigiNotar case. It took Apple much longer.

The DigiNotar case is my second example. Today the news of DigiNotar’s bankruptcy spread, after the company had to admit that its root CA (Certificate Authority) had been hacked. The bad thing is that it looks like DigiNotar knew about that long before. They didn’t inform the public. Good or bad? I opt for bad – they severely increased the security risks across the entire Internet.

RSA Security is another example. They informed the public about the hack of the RSA SecurID seeds. They informed their customers. And they got blamed. I believe the RSA approach is far better than the DigiNotar approach. Customers were informed and thus able to react. RSA spent a lot of money helping customers address the issue.

We can blame them all – Microsoft, Apple, DigiNotar, RSA, and all the others not mentioned – for security bugs. I remember a professor of informatics calculating back in the 1960s that beyond a defined (relatively low) number of lines of code there is no chance of avoiding bugs; even at a very good defect density of, say, one defect per thousand lines of code, a product with ten million lines ships with some ten thousand latent bugs. Thus, security bugs in code and security weaknesses in IT environments are somewhat “natural”. And, by the way, it is always a question of how much attackers invest in order to succeed. There is no absolute security. RSA did a lot to secure the seeds, knowing that they are the biggest risk (and every RSA SecurID customer could and should have known about that “single point of failure”). DigiNotar, from what I’ve heard, didn’t do as much. Microsoft has invested massively in improving security, but is still on a years-long journey towards better code and so on.

At the least, it is a difficult balance. Openness can’t be an excuse for security issues. But openness is better than obfuscating or hiding them. Openness allows customers to evaluate their risks and to act. And risks are better than uncertainty, which is the result of not being open about security issues. You can avoid risks – but it’s hard to deal with uncertainty.


Moving forward in IAM at your own pace

17.08.2011 by Martin Kuppinger

Over the last few years, there has been a lot of change in the Identity Provisioning market. Sun became part of Oracle, Novell is now NetIQ, BMC Control-SA is now at SailPoint, Völcker has been acquired by Quest, and Siemens DirX ended up at Atos. These changes, as well as other influencing factors like mergers & acquisitions, failed projects, and so on, lead to situations where customers start thinking about what to do next in IAM and around provisioning. Another factor is that provisioning solutions are sometimes implemented with a focus on specific environments – SAP NetWeaver Identity Management for the SAP ecosystem, Microsoft FIM for the Active Directory world. Not that these tools support only those environments, but they might end up as just another provisioning system. In addition, especially in large organizations, it is not uncommon for regional organizations to start their own IAM projects. The result: there are many situations in which organizations think about what to do next in provisioning.

However, just moving from product A to product B is not the best approach. In most cases, the deployment of provisioning tools took quite a while, and in many cases a lot of customization has been done. And even while there might be some uncertainty about the future of one or the other product (or, in some cases, certainty that the product will be discontinued sometime in the future), just migrating from one provisioning tool to another is quite expensive for little added value.

From my perspective, it is important for organizations to move at their own pace. The way to do that is to put a layer on top of the provisioning systems. I described several options in a research note (and some webinars) quite a while ago. The research note, “Access Governance Architectures”, describes different approaches for layered architectures on top of provisioning products. I’ll write an update later this year, but the current version illustrates the basic principle well. By adding a layer on top of provisioning – which might be Access Governance, a Portal/BPM layer, or IT Service Management (or a mix) – organizations can deal with more than one provisioning tool. The architecture is more complex than just using one provisioning tool. But if you cannot rely on a single provisioning tool, it is at least an approach that works.

Organizations can then, for example, replace provisioning tools fully or partially. The latter is quite common if complex customizations have been made for selected target systems. Organizations can deal with multiple provisioning systems that “just appeared” for some reason – M&A, specific solutions for a specific part of the IT ecosystem, or whatever. And they can move forward more flexibly than in a monolithic architecture. Yes, these approaches require some more architectural work at the beginning, but that pays off: in more flexible migrations, in avoiding some migrations altogether, and in fewer “political” conflicts with some of the lobbies within IT. Such a layer even makes it possible to change the integration layer without affecting the underlying provisioning systems. And for sure it allows interfacing with target systems in a flexible way, using not only provisioning tools but service desks or other types of connectivity if required.
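To make the layering idea a bit more tangible, here is a deliberately simplified Python sketch – all class and system names are invented for illustration, not taken from any product – of how such a governance layer can route access requests to different provisioning backends. The governance logic lives above the routing, so a backend can be replaced without touching the layer on top:

```python
from abc import ABC, abstractmethod

class Provisioner(ABC):
    """Uniform interface the governance layer sees for every provisioning system."""
    @abstractmethod
    def grant(self, user: str, entitlement: str) -> None: ...

class LegacyProvisioner(Provisioner):
    """Wraps an existing tool whose complex customizations are worth preserving."""
    def grant(self, user: str, entitlement: str) -> None:
        print(f"[legacy tool] granting {entitlement} to {user}")

class ServiceDeskProvisioner(Provisioner):
    """Fulfillment via ticket, for targets where no connector exists."""
    def grant(self, user: str, entitlement: str) -> None:
        print(f"[service desk] opening ticket: {entitlement} for {user}")

class AccessGovernanceLayer:
    """Routes each request to whichever system owns the target application."""
    def __init__(self, routes: dict[str, Provisioner]):
        self.routes = routes

    def request_access(self, user: str, system: str, entitlement: str) -> None:
        # Approval workflows and SoD checks would run here, independently
        # of which provisioning tool fulfills the request.
        self.routes[system].grant(user, entitlement)

layer = AccessGovernanceLayer({
    "SAP": LegacyProvisioner(),             # keep existing customizations
    "Mainframe": ServiceDeskProvisioner(),  # no connector needed
})
layer.request_access("jdoe", "SAP", "FI_ACCOUNTANT")
layer.request_access("jdoe", "Mainframe", "READ_ONLY")
```

Swapping the SAP backend for a new product later means changing one entry in the routing table, not the governance layer or its business-facing interfaces.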

But in the end, the most important thing is that it allows customers to move forward at their own pace. Thus, before you think about migrating away from your current provisioning tool, think about how you can preserve your investments and add value – through new functionality, through the business-centric interfaces of Access Governance, and through the increased flexibility of your IAM environment.


Critical success factors for IAM projects

13.07.2011 by Martin Kuppinger

This is sort of a “back to the roots” post, but for good reason. I’ve done several advisories and customer calls recently, and in some of them it became obvious that companies tend to miss some of the critical success factors for IAM (Identity and Access Management). Some of the projects are still too technology-focused. So I’ve put together some key success factors for IAM projects. These are not that technical, so you won’t read things like “support the cloud” – that should simply be a result of the requirements analysis.

Requirements: Understand the requirements of Business and IT – of both! And look at what might become requirements soon, i.e. the obvious trends (like Cloud Computing, or the increasing regulatory compliance pressure even in not-that-heavily regulated industries). Knowing the requirements helps in defining the right architecture and in slicing the big elephant of IAM into smaller pieces, i.e. projects you can handle successfully.

Architecture: IAM is more than just provisioning, even while provisioning is still an important element. Overall, architectures are increasingly modular, providing more flexibility, better integration with other pieces of IT, and the ability to serve new requirements quickly when needed. So look at the architectural options you have today and don’t focus only on the classical architectures.

Context: IAM is one element of IT, and one piece of your Information Security framework. It has to interface with Service Management and with other Information Security technologies, as well as with the entire GRC (Governance, Risk Management, Compliance) stack. So don’t look at IAM without understanding how it fits into the big picture.

Policies, Processes, Roles: Does your organization have well-defined policies for IAM? Does it have well-defined processes? And how about business roles, defined by the business? If any of these elements is missing, important input for your IAM deployment is missing. The policies define what you have to do and what to do first; the processes are about your implementation of provisioning and Access Governance (and more) – not even to speak of roles. The good news is that businesses now understand the need for these better and are more willing to actively work on these topics than some years ago.

Team: For sure it is always about having the right people – the ones who understand technology, the ones who understand business, and the ones who connect both sides.

Service focus: Last but not least it is about having a service focus. IAM is one service IT provides, as part of Information Security. It has to be user-centric, focusing on the services the users (from business and IT) require. That includes integration points to your service management environment.

You might define other ones – but these are the ones I find most important from my experience.


The Sony case – or how to best ignore security best practices

04.05.2011 by Martin Kuppinger

The data theft at Sony has been in the headlines for some days now. What surprises me most is that – from what I read and heard first – even the passwords were stored unencrypted. However, Sony claims to have used a hash to protect these passwords. It looks like Sony also stored the credit card numbers together with the associated security codes (which are, by the way, one of the most ridiculous approaches to enhancing security) and, no surprise, unencrypted. But if Sony did use hash values: why did everyone assume that these passwords would become common knowledge (at least for the hackers and their “customers”)?

But let’s start with passwords: even while it is still done frequently, it is anything but good practice to store passwords unencrypted. You do not even need to store them encrypted. Just store a hash, apply the same mathematical algorithm to passwords entered, and compare the hashes. Even while some of the algorithms in that area aren’t “bullet-proof”, that is far better than storing millions of passwords unencrypted. Storing passwords unencrypted is such a fundamental error that you can only call it grossly negligent. That is not a simple fault but ignorance of fundamental security requirements – even more so when that information is associated with credit card information and other types of highly sensitive data like bank accounts. If Sony did store hash values, that would be good practice, depending a little on the algorithm used. That reduces the risk for Sony customers, even while there is still some risk in having the hash values stolen: passwords might be derived from them, for example by brute-force attacks.
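For illustration, here is a minimal Python sketch of the store-a-hash-and-compare pattern described above – a generic example, not a description of Sony’s (or anyone else’s) actual implementation. It adds a per-user salt and a deliberately slow key-derivation function, which mitigates exactly the brute-force risk on stolen hashes just mentioned:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow, to hamper brute-force attacks on stolen hashes

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, hash) pair; only this pair is stored, never the password."""
    salt = os.urandom(16)  # unique per user, defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Apply the same algorithm to the entered password and compare the hashes."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

# Store (salt, digest) at registration; verify at login.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong password", salt, digest)
```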

Let’s look at the next point. Sony has become, from what we know, the victim of an external attack. Accessing such large amounts of data most likely involved a SQL injection attack. Interestingly, the Sony PlayStation website was hit by such an attack before, some three years ago. That something happened before raises the question of why Sony didn’t protect the information better. Haven’t they heard about database security tools, and especially database firewalls? That is exactly the type of technology which helps you protect data like (if you have them) hashed or unprotected passwords or credit card data. We recently held several webinars on database security and database governance, the most recent one yesterday, specifically about database firewalls. All the recordings are available.
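As a brief illustration of the attack vector – a generic sketch, not an account of the actual Sony exploit – SQL injection only works when untrusted input is concatenated into the query text. Parameterized queries close that door at the application level, complementing a database firewall, which filters suspicious statements at the database level:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pwd_hash BLOB)")

user_input = "alice'; DROP TABLE users; --"  # hostile input

# Vulnerable pattern: string concatenation lets the input rewrite the query.
# query = "SELECT pwd_hash FROM users WHERE name = '" + user_input + "'"

# Safe pattern: a parameterized query treats the input strictly as data.
row = conn.execute(
    "SELECT pwd_hash FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None – no such user, and no injected SQL was executed
```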

Overall, it looks like this wasn’t the most sophisticated hack ever. It looks like no insiders were involved (which would otherwise lead to the topic of PxM, i.e. protection against the misuse of privileged access/users). It looks like Sony ignored not just best or good practices but, in many areas, even average security practices.

The bad thing about this is that Sony isn’t alone out there when it comes to ignoring good/best practices in security. The most common reason is that organizations just don’t think about security – either because it is too complex or because of the price to pay for it. Hopefully the Sony case alerts some of the others to review their security and improve it. However, there is a saying in German that hope dies last. And I feel that this is more about hoping than about really expecting websites to become more secure by design.

By the way: the European Identity Conference, to be held next week in Munich, is about information security, IAM, GRC, and database security – a good place to learn more and to meet the analysts of KuppingerCole to discuss Information Security issues in person.


Why you should focus on the infrastructure layer

21.04.2011 by Martin Kuppinger

In these days of slowly increasing Cloud Computing maturity, it becomes more and more obvious that – and why – IT depends on a well-thought-out layer which I tend to simply call “infrastructure”. I have two simple pictures of IT in mind:

  • The somewhat classical model of platform, infrastructure, and software, as found in the PaaS, IaaS, and SaaS of common Cloud Computing meta models. It’s about hardware and other foundational components like operating systems, about the layer in between that manages and orchestrates everything, and about the applications themselves.
  • Another view likewise consists of three layers: the services exposed to the users (i.e. in most cases the business) on top, the service production (in the public cloud, a private cloud, or non-cloudified IT environments) at the bottom – and a layer in between which, again, is used for managing and orchestrating everything. Again, this layer might best be called “infrastructure”.

This layer is what connects everything. Thus, the efficiency and effectiveness of this layer are the foundation of the efficiency and effectiveness of the entire IT. Optimizing this layer makes it possible to better connect the available services to the business demands, and to manage the different layers in the cloud.

When looking at that layer, there are some few key elements:

  • Service Management, i.e. the entire area of procurement, service request management, accounting, availability, performance, and whatever it takes to ensure that services are delivered as expected
  • Information Security Management, including IAM (Identity and Access Management) and at least IT GRC (Governance, Risk Management, Compliance)
  • Application Infrastructures, i.e. middleware that allows connecting services, enhancing them if required, and doing the orchestration

Did I miss important elements? OK, there is classical IT security; however, that is part of Information Security – the reason we do IT security is to protect information. You might add some other elements; however, I tend to keep this model simple.

To me it appears more important to look at the dependencies between these three elements. Information Security and Service Management have to work hand in hand to ensure that access to services is restricted and controlled. Applications and Information Security are tightly related – think about how to build secure apps. And applications are, at the end of the day, nothing other than services which have to be managed.
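As a toy illustration of that “hand in hand” – names and flow invented for this sketch, not taken from any product – the service request path in the infrastructure layer calls into IAM before any production unit fulfills a request:

```python
# Toy IAM store: which services each user is entitled to.
ENTITLEMENTS = {"jdoe": {"crm", "mail"}}

def fulfill(service: str) -> str:
    """Stands in for any production unit: public cloud, private cloud, or on-premise."""
    return f"service '{service}' provisioned"

def request_service(user: str, service: str) -> str:
    """Service management path: the security check runs before fulfillment."""
    if service not in ENTITLEMENTS.get(user, set()):
        raise PermissionError(f"{user} is not entitled to {service}")
    return fulfill(service)

print(request_service("jdoe", "crm"))   # OK
# request_service("jdoe", "billing")    # would raise PermissionError
```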

I personally believe that starting with such a model and outlining the blueprint for your future IT definitely helps in separating the important from the less important things, and in focusing on building an IT ecosystem in your organization which is stable and works with whatever you plan to do in the Cloud.

See you at EIC 2011 in Munich, May 10th to 13th.


© 2014 Martin Kuppinger, KuppingerCole