Be prepared for BYOD

06.06.2011 by Martin Kuppinger

BYOD: yet another one of these acronyms. It stands for “Bring Your Own Device”. You could also say it stands for IT departments accepting that they have lost against their users – lost the discussion about which devices are allowed in corporate environments. When I travel by train, I observe an impressive number of different devices being used. There are Windows notebooks, netbooks, iPads, iBooks, other types of “pads”, smartphones,…

For a long time, corporate IT departments have tried to limit devices to a short approved list, so that they could manage and secure them. However, the reality, especially in the world of mobile devices, proves that most IT departments have failed. Certainly, many have restricted access to corporate email to BlackBerry devices – but many others haven’t managed to achieve that goal. And the popularity of Apple devices increases the heterogeneity of devices being used by employees.

It increasingly looks like the only solution is acceptance. Accept that users want to use different types of devices. Accept that innovation, especially around smartphones and pads, moves far quicker than corporate IT departments can adapt their management tools.

At first glance that sounds like a nightmare for corporate IT departments. How to manage these devices? How to secure them? However, it is not about managing or securing the devices. That would be “technology security”. It is about managing and securing information, i.e. “information security”. It’s about the I in IT, not the T. Thus, we have to look at when to allow access to which information using which tool.

To do this, a simple matrix might be the starting point. The first column contains the classes of devices – notably not every single device. The first row contains the applications and information being used. In the cells you can define the requirements, based on the risk scores of both the devices and the information. In some cases you might allow access over secure browser connections, in others you might require virtual desktop connections. In still others you might end up having to build a specialized app. However, if banks are able to secure online banking on smartphones, why shouldn’t you be able to secure your corporate information on these devices?
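As a rough illustration, here is a minimal sketch of such a matrix in Java. The device classes, information classes, and access requirements are illustrative assumptions; a real matrix would be filled from your own risk scores.

```java
import java.util.EnumMap;
import java.util.Map;

// Minimal sketch of the device/information matrix described above.
// Device classes, information classes, and requirements are illustrative assumptions.
public class AccessMatrix {

    enum DeviceClass { MANAGED_NOTEBOOK, BYOD_SMARTPHONE, BYOD_TABLET }
    enum InfoClass   { PUBLIC, INTERNAL, CONFIDENTIAL }
    enum Requirement { DIRECT_ACCESS, SECURE_BROWSER, VIRTUAL_DESKTOP, DEDICATED_APP, DENY }

    private final Map<DeviceClass, Map<InfoClass, Requirement>> matrix =
            new EnumMap<>(DeviceClass.class);

    AccessMatrix() {
        // Fill each cell based on the combined risk of device class and information class.
        define(DeviceClass.MANAGED_NOTEBOOK, InfoClass.CONFIDENTIAL, Requirement.DIRECT_ACCESS);
        define(DeviceClass.BYOD_TABLET,      InfoClass.CONFIDENTIAL, Requirement.VIRTUAL_DESKTOP);
        define(DeviceClass.BYOD_SMARTPHONE,  InfoClass.CONFIDENTIAL, Requirement.DEDICATED_APP);
        define(DeviceClass.BYOD_SMARTPHONE,  InfoClass.INTERNAL,     Requirement.SECURE_BROWSER);
        // ... the remaining cells would be defined the same way.
    }

    private void define(DeviceClass d, InfoClass i, Requirement r) {
        matrix.computeIfAbsent(d, k -> new EnumMap<InfoClass, Requirement>(InfoClass.class)).put(i, r);
    }

    // Look up the requirement for a given combination; deny if no cell is defined.
    Requirement requirementFor(DeviceClass device, InfoClass info) {
        return matrix.getOrDefault(device, Map.of()).getOrDefault(info, Requirement.DENY);
    }
}
```

The point of the sketch is simply that the decision is made per class combination and defaults to deny, not that the mapping itself is fixed.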

You might argue that building apps or deploying desktop virtualization is quite expensive. However, trying to manage all these different devices or trying to restrict the devices allowed is expensive as well – and much more likely to fail. I don’t say that it is easy to protect your corporate information in a heterogeneous environment, supporting BYOD. But it is much more likely to be feasible than to manage and secure any single device – given the increasing number of these devices, the speed of innovation, and the simple fact that corporations don’t own all these devices.

Thus it is about preparing for BYOD by providing a set of secure paths to access corporate information and to protect that information – and by understanding how to protect which information where. When you start with BYOD, do it risk-based.


Virtualization vs. Security

27.01.2011 by Martin Kuppinger

Some days ago, a vendor talked at an analyst meeting about the relationship between virtualization and security. The argument was: At the hypervisor you can combine network security management, server security management and some other aspects of security management – I can’t remember everything. Thus virtualization increases security, because you have one point of control.

Right – as long as you can control what administrators and operators are doing. Unfortunately, that’s not the case in typical virtualization environments. There is no PxM (Privileged Access, Account, Identity, User) Management at all. And in that case, combining everything is a problem, a nightmare from a compliance point-of-view. Certainly there is value in having a single point-of-control, but only if you are able to adequately control its use.

I’ve asked the speaker about the solutions around PxM offered by that vendor – there weren’t any.

Without specific virtualization security solutions, PxM being one of the most important amongst them, there is a virtualization security risk. There is potential to increase security by using adequate technology, which is provided by several vendors. But claiming that there is value in combining a lot of highly elevated administrative actions without being able to manage them doesn’t make any sense.

For a comprehensive overview of what customers expect around virtualization security, just have a look at that survey.

And don’t forget to register for EIC 2011 and Cloud 2011.


Creating new attack surfaces in VMs and Network Security devices

09.12.2010 by Martin Kuppinger

There is a good reason to add functionality to specific types of devices, especially in the network. Doing security at the edge can be highly efficient. Thus, implementing, for example, PEPs (Policy Enforcement Points) for access management in network access gateways is, from the perspective of efficiency, a pretty good idea. And when looking at what network vendors like Cisco, F5 Networks, and all the others are doing, the number of add-ons which can be added to these devices and run locally has increased significantly.

Basically the same, though at a lower level, can be observed around VMs. Hypervisors tend to become more capable of doing things. And especially when looking at client-side hypervisors, there is a clear tendency to add more and more features to them – starting with AV done centrally for many machines and probably ending with supporting the standard user interface at some point in the future.

However, neither network devices nor hypervisors are really secure by design. If we look at how many specific tools are out there to better protect these devices or software layers, and if we look at the risks around privileged accounts, especially for network equipment and VMs, it becomes obvious that there is a gap between what these devices or hypervisors can do and how well they are protected themselves. Every new feature also adds a new attack surface – in an environment which isn’t the dream of a security guy (it may be the dream of an attacker, but that’s what we want to avoid).

The best approach would be to make these devices and software layers secure by design: granular access control; centralized policy management based on XACML, tightly integrated with provisioning and PxM (Privileged “whatever – user, identity, access, account” Management); standard auditing interfaces which allow integration across devices from different vendors without heavy integration work at the (still too technical) SIEM layer; and so on.
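As a rough illustration of that idea – privileged actions on a device or hypervisor being checked against a central policy before they run – here is a minimal sketch in Java. The interface, attribute names, and the toy rule are assumptions for illustration, not a real XACML engine or any vendor’s API.

```java
import java.util.Map;

// Minimal sketch: a Policy Enforcement Point (PEP) asks a central Policy
// Decision Point (PDP) before a privileged action on a hypervisor or network
// device is executed. Attribute names and the toy rule are illustrative assumptions.
public class PrivilegedActionGuard {

    interface PolicyDecisionPoint {
        boolean permits(Map<String, String> requestAttributes);
    }

    private final PolicyDecisionPoint pdp;

    PrivilegedActionGuard(PolicyDecisionPoint pdp) {
        this.pdp = pdp;
    }

    // The PEP call: every privileged action is checked (and auditable) centrally.
    void execute(String adminId, String device, String action, Runnable privilegedAction) {
        Map<String, String> request = Map.of(
                "subject", adminId,
                "resource", device,
                "action", action);
        if (!pdp.permits(request)) {
            throw new SecurityException("Denied by central policy: " + request);
        }
        privilegedAction.run();   // only runs if the central policy allows it
    }

    public static void main(String[] args) {
        // Toy PDP: only members of a (hypothetical) hypervisor-admins role may reconfigure hosts.
        PolicyDecisionPoint toyPdp = req ->
                "hypervisor-admins".equals(roleOf(req.get("subject")))
                        && "reconfigure-host".equals(req.get("action"));

        new PrivilegedActionGuard(toyPdp)
                .execute("alice", "esx-host-01", "reconfigure-host",
                         () -> System.out.println("host reconfigured"));
    }

    private static String roleOf(String adminId) {
        return "alice".equals(adminId) ? "hypervisor-admins" : "unprivileged";
    }
}
```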

However, that will take some time. In the meantime there are two things you can do: balance the value against the risks – can you afford to pay the price in security for better efficiency? – and protect these devices consistently with management tools, PxM that supports these devices, maybe together with SSO, and auditing and analysis mechanisms.


What I would expect from capacity MANAGEMENT

24.11.2010 by Martin Kuppinger

These days I talked with a vendor about its news around capacity management. Capacity management is used as a term for products that optimize workloads in data centers and make the best use of the resources within those data centers. Honestly, I hadn’t done a deep dive into that specific area before, and maybe I’m a greenhorn on advanced IT topics – but when vendors promise capacity management, I’d expect it to be somewhat active. However, I’ve learned that it’s about reporting and analytics – even when associated with terms like dynamic, continuous, or proactive. I’ve checked several other offerings in the meantime, and it appears to be the common understanding amongst these vendors that, in the space of capacity management, management ends with some nice reports.

I would have used the terms capacity reporting or capacity analytics instead, but I know that vendor marketing tends to “beautify” its stories a little.

So what are my expectations of capacity management? The issue is obvious – it is complex to manage the workloads in increasingly virtualized and complex data centers. And it becomes even more complex when we have different “clouds”, private and public, where we might place these workloads. Thus we need support for optimizing this – dynamic or continuous capacity management is key. However, I’d at least expect a little more than just the information that there is something to be optimized. Achieving fully automated management will certainly be a long journey, requiring a strong and valid policy model to ensure that nothing goes wrong. It will also require tight integration with Business Service Management to ensure that the specifics of business processes, like high workloads at the end of the month or year, are met. It will even require that financial aspects (costs of internal and external cloud resources) and governance aspects (which data and workloads are allowed to be placed where, due to governance/compliance/security constraints?) are taken into account. In the meantime, automatically created “jobs” for the virtualization, server, and application management tools, to easily make the proposed changes reality, would be a first step.
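To make that last point concrete, here is a minimal sketch of what turning capacity analytics into proposed “jobs” could look like, assuming one utilization threshold and one governance rule. The thresholds, record types, and rule are illustrative assumptions, not any vendor’s actual product behavior.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: turn capacity *analytics* into proposed, executable "jobs"
// for the virtualization management layer. Thresholds and the governance
// check are illustrative assumptions.
public class CapacityPlanner {

    record Workload(String name, double cpuLoad, boolean confidentialData) {}
    record Host(String name, double cpuUtilization, boolean publicCloud, List<Workload> workloads) {}
    record MigrationJob(String workload, String fromHost, String toHost) {}

    static final double OVERLOADED = 0.85;   // assumed threshold
    static final double TARGET_MAX = 0.60;   // assumed headroom for target hosts

    // Propose migrations away from overloaded hosts, respecting a simple
    // governance rule: confidential workloads must not land in a public cloud.
    static List<MigrationJob> proposeJobs(List<Host> hosts) {
        List<MigrationJob> jobs = new ArrayList<>();
        for (Host source : hosts) {
            if (source.cpuUtilization() <= OVERLOADED) continue;
            for (Workload w : source.workloads()) {
                hosts.stream()
                     .filter(target -> target != source)
                     .filter(target -> target.cpuUtilization() < TARGET_MAX)
                     .filter(target -> !(w.confidentialData() && target.publicCloud()))
                     .findFirst()
                     .ifPresent(target ->
                         jobs.add(new MigrationJob(w.name(), source.name(), target.name())));
            }
        }
        return jobs;   // handed to the virtualization/server management tools for execution
    }
}
```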

I’m looking forward to seeing capacity management become reality. There is definitely value in today’s solutions. But the value could be significantly bigger than it is.


VDIs – more than a deployment option?

25.06.2010 by Martin Kuppinger

Virtual Desktop Infrastructures (VDIs) are hype. But are they really a strategic element of IT? Or are they just a deployment option? I think that the answer is influenced by two major aspects:

  • Time and the maturity of Desktop Virtualization
  • The functional breadth of VDIs

With respect to the first aspect, VDIs today are more or less a more expensive, more complex alternative to Terminal Services. Fewer users per server, the same (sometimes slightly more advanced) protocol for remote desktop access, very limited capabilities to run the VMs locally on a hypervisor – VDIs aren’t really mature yet. However, that will change. We will see more deployment options, improved management capabilities, some improvements regarding performance (however, VDIs will always be expensive in terms of compute power at the server), and so on. And especially with different local deployment options (streamed, synchronized), the need for remote desktop protocols will disappear, mobile users will be fully supported, and fewer servers will be required – without giving up advantages like the (relative) independence from hardware and some centralized management aspects (which are, however, not that different from other deployment approaches).

The other aspect is about management. It isn’t sufficient to integrate the management of server and desktop virtualization – and even adding storage virtualization management to that is not enough. Application virtualization has to be integrated as well. But even then some capabilities are lacking:

  • There will most likely be other types of desktops for a pretty long time – the more specialized ones for “power users” and “knowledge workers”, for specific user groups like engineers or stock brokers, and so on. It is not only about the 50% or 80% of desktops which fall into a few standardized categories. The main issue is always the remaining 20% or 50% of not-that-standardized desktops. And they have to be managed centrally as well.
  • That requires configuration management and software deployment beyond building a few standard images. Image management in reality is far more complex than just having a few standard images. And not every application can be virtualized. Beyond that, we need several other elements which are typically found in Client Lifecycle Management today: think about inventories and License Management (a minimal sketch follows this list). In other words: you will either need Client Lifecycle Management (CLM), or VDIs will have to fully integrate it in the future.
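To illustrate the inventory and license aspect mentioned above, here is a minimal sketch of a per-desktop software inventory checked against purchased license counts. The record types and method names are hypothetical, not any CLM product’s API.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of one client-lifecycle-management capability: a software
// inventory per desktop (virtual or not) checked against license counts.
// Types and names are illustrative assumptions.
public class LicenseCheck {

    record Desktop(String id, List<String> installedProducts) {}

    // Count installations per product across all inventoried desktops.
    static Map<String, Long> installCounts(List<Desktop> inventory) {
        return inventory.stream()
                .flatMap(d -> d.installedProducts().stream())
                .collect(Collectors.groupingBy(p -> p, Collectors.counting()));
    }

    // Report products whose installations exceed the licenses owned.
    static Map<String, Long> overused(List<Desktop> inventory, Map<String, Long> licensesOwned) {
        return installCounts(inventory).entrySet().stream()
                .filter(e -> e.getValue() > licensesOwned.getOrDefault(e.getKey(), 0L))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```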

In the future, a more complete VDI stack with full CLM support and optimized support for local deployments and mobile users might become the standard – even for older operating systems and non-Windows platforms. For the time being, it is probably the better strategy to understand VDIs as one deployment option amongst others and to integrate all these deployment options under a centralized management system. At the very least, it is a good idea to be realistic about VDIs and not too enthusiastic.

So I’m a believer in VDIs – but I’m a sceptic regarding their short-term value for most use cases. What is your opinion on this?


Reducing lock-in risks – Salesforce.com has understood

11.06.2010 by Martin Kuppinger

One of the really interesting announcements in the Cloud space these days has been from VMware and Salesforce.com with their vmforce offering. Their claim is “The trusted cloud for enterprise Java developers”. Correct. It is a cloud environment where Java developers can build apps with an Eclipse-based Spring IDE, where they can use Tomcat, and so on. Thus there is an environment to build and deploy Java apps in the cloud.

Beyond that, force.com functionality might be used. That is definitely interesting because force.com provides a lot of services around business analytics, reporting, mobile device support, and many other functional areas. That might speed up development significantly – sort of rapid development support in that environment.

However, the most important point from my perspective is that vmforce is much more open than force.com itself. The force.com platform is proprietary – and that means lock-in risk. Thus users have to analyze whether the advantages of rapid development, the force.com database, the force.com services, and so on are worth the lock-in in the sense of very limited portability.

When choosing vmforce, developers can build Java apps in a standard environment. Thus, they can avoid these lock-in risks. If they opt to use force.com services, they pay a price in the sense of using specific services from a specific vendor. However, with a good software architecture, the apps can be built in a way that allows force.com-specific features to be replaced by other services later.
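As a minimal sketch of that architectural point: keeping vendor-specific services behind your own interface makes them replaceable. The interface and class names below are hypothetical and not part of the force.com or vmforce APIs.

```java
// Minimal sketch: keep vendor-specific services behind your own interface
// so they can be swapped later. All names here are hypothetical.
public class ReportingExample {

    // The application depends only on this abstraction.
    interface ReportingService {
        byte[] renderReport(String reportId);
    }

    // One implementation could delegate to a force.com reporting service ...
    static class ForceDotComReporting implements ReportingService {
        @Override
        public byte[] renderReport(String reportId) {
            // call the vendor-specific API here (omitted in this sketch)
            return new byte[0];
        }
    }

    // ... another could use any other provider or an in-house component.
    static class InHouseReporting implements ReportingService {
        @Override
        public byte[] renderReport(String reportId) {
            return new byte[0];
        }
    }

    // Swapping providers then becomes a wiring decision, not a rewrite.
    static ReportingService chooseProvider(boolean useForceDotCom) {
        return useForceDotCom ? new ForceDotComReporting() : new InHouseReporting();
    }
}
```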

With the combination of force.com and vmforce, Salesforce offers developers choice – from a more closed, very rapid and efficient environment to a very open but somewhat more complex environment, plus the option to combine the two in a flexible manner. That makes sense, from my perspective. And it is definitely worth having a look at vmforce and playing around with it once the preview versions become available this fall. That is, by the way, one negative point: we are still some time away from production use of vmforce.


© 2014 Martin Kuppinger, KuppingerCole