25.06.2010 by Martin Kuppinger
Virtual Desktop Infrastructures (VDIs) are hyped. But are they really a strategic element of IT? Or are they just a deployment option? I think the answer is influenced by two major aspects:
- Time and the maturity of Desktop Virtualization
- The functional breadth of VDIs
With respect to the first aspect, VDIs today are more of an expensive, more complex alternative to Terminal Services. Fewer users per server, the same (sometimes slightly more advanced) protocol for remote desktop access, very limited capabilities to run the VMs locally on a hypervisor – VDIs aren’t really mature yet. However, that will change. We will see more deployment options, improved management capabilities, some improvements regarding performance (although VDIs will always be expensive in terms of compute power at the server), and so on. And especially with different local deployment options (streamed, synchronized), the need for remote desktop protocols will disappear, mobile users will be fully supported, and fewer servers will be required – without giving up advantages like the (relative) independence from hardware and some centralized management aspects (which are, however, not that different from other deployment approaches).
The other aspect is about management. It isn’t sufficient to integrate the management of server and desktop virtualization – and even adding storage virtualization management to that is not enough. Application virtualization has to be integrated as well. But even then, some capabilities are still missing:
- There will most likely be other types of desktops for a pretty long time – the more specialized ones for “power users” and “knowledge workers”, for specific user groups like engineers or stock brokers, and so on. It is not only about the 50% or 80% of desktops which fall into a few standardized categories. The main issue is always the remaining 20% or 50% of not-that-standardized desktops. And they have to be managed centrally as well.
- That requires configuration management and software deployment beyond building a few standard images. Image management in reality is far more complex than just having a few standard images. And not every application can be virtualized. Beyond that, we need several other elements which are typically found in Client Lifecycle Management today: think about inventories and license management. In other words: you will either need Client Lifecycle Management (CLM), or VDIs will have to fully integrate it in the future.
In the future, a more complete VDI stack with full CLM support and optimized support for local deployments and mobile users might become the standard – even for older operating systems and non-Windows platforms. In the meantime, it is probably the better strategy to understand VDIs as one deployment option among others and to integrate all these deployment options under a centralized management system. At least it is a good idea to be realistic about VDIs and not too enthusiastic.
So I’m a believer in VDIs – but I’m a sceptic regarding their short-term value for most use cases. What is your opinion on this?
20.06.2010 by Martin Kuppinger
LDAP (Lightweight Directory Access Protocol) is well established. It is the foundation for today’s Directory Services, which support LDAP as a protocol and which usually build their data structure on the associated LDAP schema. There are many interfaces for developers to use LDAP, from the LDAP C API to high-level interfaces for many programming environments.
Even though LDAP is well established, it is somewhat limited. There are several restrictions – two important ones are:
- The structure of LDAP is (more or less) hierarchical. There is one basic structure of containers – and linking leaf objects (think about the association of users and groups) is somewhat limited. That structure is a heritage of X.500, from which LDAP is derived – LDAP originally being the lightweight version of DAP (Directory Access Protocol). X.500 was designed by telcos for telcos, i.e. with respect to their specific needs for structuring information. However, anyone who has ever thought about structuring Novell’s eDirectory or Microsoft’s Active Directory knows that there is frequently more than one hierarchy, for example the location and the organizational structure. The strict hierarchy of LDAP is an inhibitor for several use cases.
- LDAP is still focused on the specific, single directory. It doesn’t address the need to store parts of the information in fundamentally different stores. But the same piece of information might be found locally on a notebook, in a network directory like Active Directory, in a corporate directory, and so on. How to deal with that? How to use the same information across multiple systems, exchange it, associate usage policies with it, and so on? That is out of scope for LDAP.
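The first limitation can be made concrete with a small sketch. In LDAP, an entry’s distinguished name (DN) encodes exactly one parent chain, so an entry can live in only one hierarchy; any second view (the organizational structure on top of the location tree, or group membership) has to be modeled by reference. The following is a minimal, illustrative Python sketch with naive DN handling (no escaping, no real directory client) – the names and data are mine, not from any real directory:

```python
# Illustrative sketch: an LDAP entry's DN encodes exactly one parent chain,
# so the entry can live in only one hierarchy. (Naive parsing, no escaping.)

def parse_dn(dn):
    """Split a DN into its RDN components, leaf first."""
    return [rdn.strip() for rdn in dn.split(",")]

def parent_dn(dn):
    """An entry has exactly one parent: the DN minus its leading RDN."""
    return ",".join(parse_dn(dn)[1:])

# One entry, placed in the *location* hierarchy:
user_dn = "cn=jdoe,ou=Munich,o=Example"
print(parent_dn(user_dn))           # ou=Munich,o=Example

# Expressing the *organizational* view as well forces a link by reference --
# the classic DN-valued attribute, as in group membership:
sales_group = {
    "dn": "cn=Sales,ou=Groups,o=Example",
    "member": [user_dn],            # cross-hierarchy link, not a second parent
}
assert user_dn in sales_group["member"]
```

The point of the sketch: the second hierarchy is not a first-class structure but a list of string references, which is exactly why multi-hierarchy designs in eDirectory or Active Directory feel strained.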
I could extend the list – but this is not about the limitations of LDAP. LDAP has done a great job for years, but there is obviously a need for the next big step. An interesting foundation for that next big step comes from Kim Cameron, Chief Identity Architect at Microsoft. He has developed a schema which he calls system.identity. There hasn’t been much noise around it so far. There is a stream from last year’s Microsoft PDC, there is a little information at MSDN plus a blog post, and there is the keynote from this year’s European Identity Conference. But it is worth having a look at. The approach of system.identity is to define a flexible schema for identity-related information which can cover everything – from local devices to enterprise- and internet-style directories, from internal users to customers and device identities, including all the policies. It is, from my perspective, a very good start for an evolution (compatibility with LDAP is covered) well beyond LDAP and today’s directories.
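To give a feel for what “a flexible schema for identity-related information” means, here is a rough sketch of the general idea: identity data modeled as a set of typed claims about a subject rather than as a fixed hierarchical entry. To be clear, all class and field names below are my own illustration of the concept, not the actual system.identity schema:

```python
# Illustrative sketch only: the names here are NOT the system.identity
# schema. The idea: model identity data as typed claims about a subject,
# so one schema can describe a local device account, an enterprise user,
# or an internet-style identity alike.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Claim:
    type: str      # e.g. a URI identifying the kind of claim
    value: str
    issuer: str    # who asserted this claim

@dataclass
class Identity:
    subject_id: str
    claims: list = field(default_factory=list)

    def get(self, claim_type):
        return [c.value for c in self.claims if c.type == claim_type]

# The same structure covers very different kinds of identities:
employee = Identity("urn:example:user:jdoe", [
    Claim("urn:example:claim:mail", "jdoe@example.com", "corp-dir"),
    Claim("urn:example:claim:role", "engineer", "hr-system"),
])
device = Identity("urn:example:device:nb-0815", [
    Claim("urn:example:claim:owner", "urn:example:user:jdoe", "inventory"),
])
print(employee.get("urn:example:claim:role"))   # ['engineer']
```

Because nothing in the structure is specific to users, enterprises, or a single store, the same model can travel between a notebook, a network directory, and a corporate directory – which is exactly the gap LDAP leaves open.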
I’ve put the concept under a stress test in a customer workshop recently. The customer is thinking about a corporate directory. Most people there are not directory guys but enterprise IT architects. And they definitely liked the path system.identity is showing. It covers their needs much better than the LDAP schema. That proved to me that system.identity is not only for geeks like me but obviously for the real world. Thus: have a look at it and start thinking beyond LDAP. The concept of system.identity, despite being at an early stage, is a very good place to start.
11.06.2010 by Martin Kuppinger
One of the really interesting announcements in the Cloud space these days has been from VMware and Salesforce.com with their vmforce offering. Their claim is “The trusted cloud for enterprise Java developers”. Correct. It is a cloud environment where Java developers can build apps with a Spring-enabled, Eclipse-based IDE, where they can use Tomcat, and so on. Thus there is an environment to build and deploy Java apps in the cloud.
Beyond that, force.com functionality might be used. That is definitely interesting because force.com provides a lot of services around business analytics, reporting, mobile device support, and many other functional areas. That might speed up development significantly – sort of rapid development support in that environment.
However, the most important point from my perspective is that vmforce is much more open than force.com itself. The force.com platform is proprietary – and that equates to lock-in risk. Thus users have to analyze whether the advantages of rapid development, the force.com database, the force.com services, and so on are worth the lock-in in the sense of very limited portability.
When choosing vmforce, developers can build Java apps in a standard environment and thus avoid these lock-in risks. If they opt to use force.com services, they pay a price in the sense of depending on specific services from a specific vendor. However, with a good software architecture, the apps can be built in a way that allows force.com-specific features to be replaced by other services later.
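That architectural point can be sketched in a few lines: hide the vendor-specific service behind an interface of your own, so the platform-backed implementation can later be swapped out. The service names below are hypothetical, and I use Python here for brevity although vmforce itself targets Java – the pattern is the same in either language:

```python
# Sketch of isolating a vendor-specific service behind your own interface.
# All service and method names are hypothetical, for illustration only.

from abc import ABC, abstractmethod

class ReportingService(ABC):
    """App code depends only on this interface, never on a vendor API."""
    @abstractmethod
    def run_report(self, name: str) -> str: ...

class ForceDotComReporting(ReportingService):
    def run_report(self, name: str) -> str:
        # In a real app, this would call the platform's reporting service.
        return f"force.com report: {name}"

class InHouseReporting(ReportingService):
    def run_report(self, name: str) -> str:
        return f"in-house report: {name}"

def quarterly_summary(reports: ReportingService) -> str:
    # Business logic stays portable; only the wiring picks the backend.
    return reports.run_report("quarterly-summary")

print(quarterly_summary(ForceDotComReporting()))
print(quarterly_summary(InHouseReporting()))
```

The price of the pattern is a thin adapter layer per vendor service; the payoff is that the lock-in is confined to that layer rather than spread through the application.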
With the combination of force.com and vmforce, Salesforce offers developers a choice – from a more closed, very rapid and efficient environment to a very open but somewhat more complex environment, plus the option to combine the two in a flexible manner. That makes sense, from my perspective. And it is definitely worth having a look at vmforce and playing around with it once the preview versions become available this fall. That is, by the way, a negative point: we are still some time away from production use of vmforce.