Agility, service levels, and cost

06.10.2011 by Martin Kuppinger

Some two weeks ago I was at the EMC EMEA Analyst Summit in France. In one of the sessions, Chuck Hollis, VP Global Marketing CTO of EMC Corporation (what a title, isn’t it?), made a very good comment when one of the presenters talked about the need for

  • agility and speed
  • service level fulfillment and improvement
  • cost optimization

of IT when providing services. He pointed out that IT typically looks at these in the order cost – service level – agility, while business looks at agility – service level – cost. I really like that observation.

You might argue that business is always talking about IT being too expensive. Yes, they do. But there are reasons for that. One reason is that business still frequently doesn’t get a real answer to the “what’s in it for me?” question. If business doesn’t see a value (and supporting the need for agility, i.e. enabling business to become better, is sort of the big theme behind the business value), it looks at costs. No surprise at all. However, if IT provides what business really wants, then the discussion is much less about cost.

In other words: IT has to understand what business really needs. Look at the business services they want, at the business value, and at how IT supports agility and speed. Ensure the service levels. And then try to do it at optimized cost.

Honestly: that isn’t a groundbreaking insight. Many of us have been talking about this for years. But do we act accordingly? Not always. Keeping in mind that the order should rather be agility – service level – cost than the other way round might help us become better at Business/IT alignment.


Who are the good guys – the ones that keep you informed about security issues, or the others?

20.09.2011 by Martin Kuppinger

I understand the reasoning behind it – but it is still contradictory. People expect IT vendors to inform them quickly about security issues. And people then blame them for those security issues. OK, if there are security issues which affect someone, that person has some reason to blame the company responsible for them. Nevertheless, some more fairness would help in achieving even more openness. If you have to admit a security issue and you fix it, then this is obviously better than just trying to hide what has happened.

Let’s take some examples. Microsoft has been bashed for years for not doing enough to secure its products. They have built a sophisticated system for patching and informing the public. They are very open regarding security weaknesses. But they are still blamed for being insecure. Apple is much more reluctant in its openness regarding security issues. But they aren’t blamed as much as Microsoft. Fair or unfair? I personally prefer the Microsoft approach – Microsoft has been amongst the first to provide a patch in the DigiNotar case. It took Apple much longer.

The DigiNotar case is my second example. Today the news of its bankruptcy spread, after DigiNotar had to admit that their root CA (Certificate Authority) had been hacked. The bad thing is that it looks like DigiNotar knew about this long before. They didn’t inform the public. Good or bad? I opt for bad – they severely increased the security risks in the entire Internet.

RSA Security is another example. They informed the public about the hack of the RSA SecurID seeds. They informed their customers. And they got blamed. I believe that the RSA approach is far better than the DigiNotar approach. Customers were informed and thus able to react. RSA spent a lot of money on helping customers address their issues.

We can blame all of them – Microsoft, Apple, DigiNotar, RSA, and all the others not mentioned – for security bugs. I remember a professor of informatics calculating back in the 1960s that beyond a defined (relatively low) number of lines of code there is no chance of avoiding bugs. Thus, security bugs in code and security weaknesses in IT environments are somewhat “natural”. And, by the way, it’s always a question of how much an attacker invests in order to succeed. There is no absolute security. RSA did a lot to secure the seeds, knowing that they are the biggest risk (and every RSA SecurID customer could and should have known about that “single point of failure”). DigiNotar, from what I’ve heard, didn’t do as much. Microsoft has invested massively in improving security, but is still on a years-long journey towards better code and so on.

Admittedly, it is a difficult balance. Openness can’t be an excuse for security issues. But openness is better than fudging around or hiding security issues. Openness allows customers to evaluate their risks and to act. And risks are better than uncertainty, which is the result of not being open about security issues. You can address risks – but it’s hard to deal with uncertainty.


Be prepared for BYOD

06.06.2011 by Martin Kuppinger

BYOD: again one of these acronyms. It stands for “Bring Your Own Device”. You could also say it stands for IT departments accepting that they’ve lost against their users. They have lost the discussion about which devices shall be allowed in corporate environments. When I travel by train, I observe an impressive number of different devices in use. There are Windows notebooks, netbooks, iPads, iBooks, other types of “pads”, smartphones,…

For a long time corporate IT departments have tried to limit the number of devices to a small list, in order to be able to manage and secure them. However, reality, especially in the world of mobile devices, proves that most IT departments have failed. For sure, many have restricted access to corporate e-mail to Blackberry devices. But many haven’t managed to achieve that target. And the popularity of Apple devices increases the heterogeneity of devices used by employees.

It increasingly looks like the only solution can be acceptance. Accept that users want to use different types of devices. Accept that innovation, especially around smartphones and pads, is far quicker than corporate IT departments can adapt their management tools.

At first glance that sounds like a nightmare for corporate IT departments. How to manage these devices? How to secure them? However, it is not about managing or securing the devices. That would be “technology security”. It is about managing and securing information, i.e. “information security”. It’s about the I in IT, not the T. Thus, we have to look at when to allow access to which information using which tool.

To do this, a simple matrix might be the starting point. The first column contains the classes of devices – notably not every single device. The first row contains the applications and information being used. In the cells you can define the requirements, based on the risk scores of both the devices and the information. In some cases you might allow access based on secure browser connections, in others you might require virtual desktop connections. In yet others you might end up having to build a specialized app. However, if banks are able to secure online banking on smartphones, why shouldn’t you be able to secure your corporate information on these devices?
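
To make the idea concrete, here is a minimal sketch of such a matrix in Python. The device classes, information classes, risk scores, and thresholds are all illustrative assumptions – your own classification will differ:

```python
# Illustrative risk scores for device classes and information classes (1 = low risk).
DEVICE_RISK = {"managed_notebook": 1, "byod_tablet": 2, "byod_smartphone": 3}
INFO_SENSITIVITY = {"public_website": 1, "corporate_email": 2, "financial_reports": 3}

def access_requirement(device_class: str, info_class: str) -> str:
    """Combine both risk scores and map the result to an access requirement."""
    score = DEVICE_RISK[device_class] * INFO_SENSITIVITY[info_class]
    if score <= 2:
        return "direct access"
    if score <= 4:
        return "secure browser connection"
    if score <= 6:
        return "virtual desktop connection"
    return "dedicated hardened app, or no access"

# Example: reading financial reports on a private smartphone lands in the
# highest-risk cell of the matrix.
print(access_requirement("byod_smartphone", "financial_reports"))
```

The point is not the specific thresholds, but that the decision is driven by the combination of device class and information class – not by the individual device.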

You might argue that building apps or deploying desktop virtualization is quite expensive. However, trying to manage all these different devices or trying to restrict the devices allowed is expensive as well – and much more likely to fail. I’m not saying that it is easy to protect your corporate information in a heterogeneous environment supporting BYOD. But it is much more likely to be feasible than managing and securing every single device – given the increasing number of these devices, the speed of innovation, and the simple fact that corporations don’t own all of them.

Thus it is about preparing for BYOD by providing a set of secure paths to access corporate information and to protect that information – and by understanding how to protect which information where. When you start with BYOD, do it risk-based.


Why you should focus on the infrastructure layer

21.04.2011 by Martin Kuppinger

In these days of slowly increasing maturity of Cloud Computing, it becomes more and more obvious that – and why – IT depends on a well-thought-out layer which I tend to simply call “infrastructure”. I have two simple pictures of IT in mind:

  • The somewhat classical model of platform, infrastructure, and software, as found in PaaS, IaaS, and SaaS in the common Cloud Computing meta models. It’s about hardware and other foundational components like operating systems, about the layer in between to manage and orchestrate everything, and about the applications themselves.
  • Another view consists of three layers as well: the services exposed to the users (i.e. in most cases the business) on top, the service production (either in the public cloud, a private cloud, or non-cloudified IT environments) at the bottom – and a layer in between which, again, is used for managing and orchestrating everything. Again, this layer might best be called “infrastructure”.

This layer is what connects everything. Thus, the efficiency and effectiveness of this layer are the foundation of the efficiency and effectiveness of the entire IT. Optimizing this layer allows you to better connect the available services to the business demands. It allows you to manage the different layers in the cloud.

When looking at that layer, there are a few key elements:

  • Service Management, i.e. the entire area of procurement, service request management, accounting, availability, performance, and whatever it takes to ensure that the services are delivered as expected
  • Information Security Management, including IAM (Identity and Access Management) and at least IT GRC (Governance, Risk Management, Compliance)
  • Application Infrastructures, i.e. middleware allowing you to connect services, to enhance them if required, and to do the orchestration

Did I miss important elements? OK, there is classical IT security; however, that’s part of Information Security – the reason we are looking at IT security is to protect information. You might add some other elements, but I tend to keep this model simple.

To me it appears more important to look at the dependencies between the three elements. Information Security and Service Management have to work hand in hand to ensure that access to services is restricted and controlled. Applications and Information Security are tightly related – think about how to build secure apps. And applications are, at the end of the day, nothing other than services which have to be managed.
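
As a thought experiment, these dependencies can be sketched in code. The following Python sketch is purely illustrative – the interfaces and method names are my assumptions, not a product blueprint – but it shows how Service Management and Information Security interlock within the infrastructure layer:

```python
from typing import Protocol

class ServiceManagement(Protocol):
    def provision(self, user: str, service: str) -> str: ...

class InformationSecurity(Protocol):
    def access_allowed(self, user: str, service: str) -> bool: ...

class ApplicationInfrastructure(Protocol):
    def orchestrate(self, services: list[str]) -> None: ...

class InfrastructureLayer:
    """The layer in between: connects exposed services with service production."""

    def __init__(self, sm: ServiceManagement, sec: InformationSecurity,
                 apps: ApplicationInfrastructure) -> None:
        self.sm, self.sec, self.apps = sm, sec, apps

    def request_service(self, user: str, service: str) -> str:
        # Service Management and Information Security work hand in hand:
        # no service is provisioned without an access decision.
        if not self.sec.access_allowed(user, service):
            raise PermissionError(f"{user} may not access {service}")
        return self.sm.provision(user, service)
```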

I personally believe that starting with such a model and outlining the blueprint for your future IT definitely helps in separating the important from the less important things, and in focusing on building an IT ecosystem in your organization which is stable and works with whatever you plan to do in the Cloud.

See you at EIC 2011 in Munich, May 10th to 13th.


Should you learn about fraud from your customers?

06.04.2011 by Martin Kuppinger

Today I stumbled upon an interesting survey. The core result: more than three-quarters of financial institutions learn of fraud incidents only when notified by their own customers. The quote I like most is: “In other words, despite the availability today of world-class fraud detection technology, despite broad awareness of the current fraud threats and incidents – nothing spreads faster than word of a breach”. Fascinating, isn’t it!? But it is also somewhat irritating.

There is some reason for financial institutions not to invest as much as they could and should in security. Security comes at a cost, and financial institutions still balance these costs against the fraud-related losses. I doubt that this equation really works out as expected, but I’ve had this discussion more than once – frequently with CIOs and CISOs who don’t have the budgets they’d like to have for security.

However, taking some risk is a valid approach. Given that there will never be perfect, 100% security, everyone has to balance the cost of security against the (potential) cost of incidents happening. That’s the same approach everyone uses in daily life when deciding about insurance. The fundamental problem in this area is that risks tend to be rated too low, whilst costs are seen much more realistically. That’s especially true when it comes to severe issues which might affect the net cash inflow, because these heavily affect the business. Such risks are frequently ignored or missed when looking at IT security in financial institutions, leading to underestimated risk and thus a lack of willingness to invest in security.
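
This balancing act is essentially an expected-loss calculation. A back-of-the-envelope sketch in Python – all figures are invented purely for illustration:

```python
# All figures are invented for illustration.
p_incident = 0.05        # estimated probability of a severe fraud incident per year
impact = 10_000_000      # estimated loss per incident (direct loss plus reputation)
expected_loss = p_incident * impact          # 500,000 per year

control_cost = 300_000   # annual cost of the security investment
p_residual = 0.01        # estimated incident probability with the control in place
residual_loss = p_residual * impact          # 100,000 per year

# The investment pays off if control cost plus residual expected loss
# stays below the unmitigated expected loss: 400,000 < 500,000 here.
pays_off = control_cost + residual_loss < expected_loss
print(pays_off)  # True

# The trap described above: rate the probability too low (say 0.02 instead
# of 0.05) and the very same control suddenly looks like wasted money.
```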

Another problem is the frequent lack of a holistic security strategy. Attacks at the operating system layer are still possible even when security at the application layer is good – and so on… Investing in point solutions might give a feeling of security, but it seldom leads to real security.

However, all this doesn’t explain why financial institutions aren’t even aware of incidents in so many situations. Even when someone takes a risk, he should have controls in place which provide the fraud information. Not doing so is just unacceptable, because it moves things from risk to uncertainty – and is thus against the governance requirements the management has to fulfill. Not knowing about fraud is a clear indicator of insufficient risk management, because risks are simply ignored.

From my perspective, financial institutions have to act in this area by looking at all risks and by acting appropriately – at least by knowing these risks, but better by mitigating them.

EIC 2011 will have several sessions around security for financial institutions, and a lot of experts from the finance industry will be attending – thus it’s a perfect place to meet with peers and discuss.


RSA SecurID again

23.03.2011 by Martin Kuppinger

I blogged last week about the RSA SecurID case. In the meantime there have been several other posts and advisories on it, and I’d like to put together some thoughts from my side, looking at what customers should do now.

What should existing customers do short-term?

In most cases, RSA SecurID will be a standard mechanism for strong authentication which can’t be replaced immediately. If customers don’t use a solution for versatile authentication, they usually aren’t able to opt for another (stronger) authentication mechanism on the fly. Not using RSA SecurID, however, would make things even worse, because it would mean stepping back to one factor with one or two means of authentication. Thus it is about staying with RSA SecurID and deciding which additional actions to take – “compensatory controls”, e.g. increased auditing, additional fraud detection technologies, and so on.

Customers who have a versatile authentication approach in place might evaluate whether they can replace RSA SecurID with another factor – which then would be, for time and logistics reasons, an approach not depending on hardware. However, doing that will be somewhat complex (helpdesk calls, technical aspects,…). Thus customers should first check whether the increased risk of using RSA SecurID is acceptable or not. Instead of replacing it, the option of adding another factor/means for high-risk interactions and transactions appears to be most appropriate. Besides this, the auditing actions mentioned above have to be implemented.

What should existing customers do mid-term?

Replacing a technology like RSA SecurID is quite expensive. Given that RSA will harden its own systems and seeds can be changed over time, the threat will decrease. However, as mentioned in my last post, RSA SecurID will never be the same again. The mid-term answer, from my perspective, is versatility. Having more options for quickly changing to other or additional factors and means of authentication is the most promising approach. RSA SecurID then becomes just one of multiple approaches.
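
What “versatility” means in practice can be sketched quickly. In the following Python sketch – mechanism names and risk tiers are my own illustrative assumptions – authentication mechanisms are pluggable and selected per risk level, so adding or swapping a factor becomes a policy change rather than a re-implementation:

```python
from typing import Callable, Dict, List

# Registry of pluggable authentication mechanisms.
AUTHENTICATORS: Dict[str, Callable[[str], bool]] = {}

def register(name: str):
    def wrap(fn: Callable[[str], bool]) -> Callable[[str], bool]:
        AUTHENTICATORS[name] = fn
        return fn
    return wrap

@register("password")
def check_password(user: str) -> bool:
    return True  # placeholder for the real check

@register("otp_token")
def check_otp(user: str) -> bool:
    return True  # e.g. RSA SecurID - just one mechanism among several

@register("out_of_band")
def check_out_of_band(user: str) -> bool:
    return True  # e.g. a confirmation via a second channel

# Which factors each risk tier requires is pure policy:
POLICY: Dict[str, List[str]] = {
    "low":    ["password"],
    "medium": ["password", "otp_token"],
    "high":   ["password", "otp_token", "out_of_band"],
}

def authenticate(user: str, risk_tier: str) -> bool:
    return all(AUTHENTICATORS[factor](user) for factor in POLICY[risk_tier])
```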

For high-risk environments, biometrics might come into play again (if not used yet). In addition, there are some approaches to two-factor authentication which don’t rely on seeds and secret algorithms. However, they aren’t necessarily absolutely secure either (if anything can be absolutely secure), so customers should carefully evaluate whether other approaches provide real advantages over the established RSA SecurID approach. The same level of mistrust should be applied to all types of authentication.

What should potential buyers do?

It is about re-evaluating the strategy for authentication. Versatility is key – and strategies need to be re-thought if they are not focused on a versatile approach allowing different types of authentication mechanisms to be used and exchanged flexibly. Regarding RSA SecurID, the risk has to be rated again, and decisions about whether the approach is sufficient for the interactions and transactions to be protected have to be reviewed. From my perspective it is not that much about not using RSA SecurID (depending on what RSA does to increase security again, for sure – but I assume they will do a lot) but about carefully analyzing the level of protection provided and weighing it against the risks of authentication fraud for what has to be protected. When deciding to use RSA SecurID, appropriate controls have to be implemented – but that is true for any other authentication mechanism as well.

By the way: regardless of the RSA SecurID approach, any authentication strategy which doesn’t focus on versatility, risk-based authentication/authorization, and context-based authentication/authorization should be re-thought.

Some general thoughts:

RSA has had a very strong image for its RSA SecurID approach – and it worked for many years. However, there are two fundamental issues:

  • Centralized seeds
  • Confidential algorithm

Both are inherent risks of that mechanism. Thus security is obviously limited. Regardless of which approach you use, thinking about the potential weaknesses (social phishing; central stores which might become targets of attackers;…) is important. Unfortunately, security comes at a price, because there are no simple, cheap, easy-to-use approaches without logistics costs and other shortcomings which provide perfect security.
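
For contrast, the time-based OTP scheme standardized in RFC 6238 uses a public algorithm – only the per-token seed is secret. The sketch below (standard-library Python, deliberately not RSA’s proprietary SecurID algorithm) shows why any seed store is so sensitive: whoever holds the seed can compute every future OTP.

```python
import hashlib, hmac, struct, time
from typing import Optional

def totp(seed: bytes, at: Optional[float] = None, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP per RFC 6238 (HMAC-SHA1 with dynamic truncation)."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and server share the seed - and so does anyone who steals it:
seed = b"per-token secret seed material"
print(totp(seed))  # an attacker holding the seed computes the identical value
```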

Again, as mentioned in my last post, we will discuss things like versatile authentication and the RSA SecurID incident at EIC 2011. You shouldn’t miss that event.


RSA SecurID – it will never be the same again

18.03.2011 by Martin Kuppinger

Yesterday RSA informed the public about a security breach which they assume to be an “advanced persistent threat”, i.e. a long-running attack. In this case it was apparently directed against the seeds, i.e. the base material for the keys which are used to generate OTPs. In other words: the first and (until now) stronger part of the two-factor authentication used with RSA SecurID cards isn’t secure anymore. In fact it shows that every approach which relies on a central database of such seeds has inherent security weaknesses.

The significance of this breach becomes obvious when looking at the SEC 8-K filing – it is serious enough to require an official note to the stock markets.

The bad thing in this case is that it’s not a single RSA SecurID card which is compromised, which would always be the case with stolen or lost cards – potentially all RSA SecurID cards are affected, because currently it is neither known which seeds are compromised nor by whom. RSA has (under the link above) provided hints for its customers on what to do now. They claim that there is no immediate impact for existing customers. But when looking at the actions recommended by RSA, it becomes obvious that customers can’t do much – besides fundamental changes, e.g. RSA SecurID implementations based on new seeds (with all the consequences of such a change) or changes to other/additional authentication mechanisms.

Our recommendation for customers who have RSA SecurID cards implemented is to first carefully analyze the situation and their specific risks – e.g. which type of information is at risk if the RSA SecurID based authentication is not only at risk (like now) but an attack actually takes place? Based on that, it is about deciding whether immediate actions have to be taken (besides informing the users to be even more careful regarding phishing mails and so on, because the authentication strength now relies mainly on the PIN). These actions might range from increased threat analysis and forensics to adding other authentication technologies.

Beyond that, customers should take the opportunity to rethink their authentication strategies in general. How could they implement a flexible, versatile approach for all types of users and the different security requirements of interactions and transactions? Versatility definitely helps – having a versatile approach to authentication in place allows you to add additional factors and means or to replace some factors quickly – which isn’t the case when only one specific mechanism (like RSA SecurID) is used.

What does it mean for RSA? In fact, the “trusted name in security” will definitely suffer – trust takes a long time to build and a very short time to lose. However, RSA isn’t that heavily dependent on the success of RSA SecurID anymore. The company has broadened its portfolio. RSA SecurID is sort of a “cash cow”, but there are other offerings, including versatile authentication solutions. And, honestly: if you use RSA SecurID tokens as one element within a versatile authentication strategy, there is a value in that approach – especially given that RSA will do everything to ensure that such a breach never happens again (but there is always some risk of unlikely things happening). But it is only one approach of many, and the best thing customers can do is to become more flexible, more versatile.

Interestingly, this fits well with research we are currently working on. One part of this will be a research note comparing the strength and usability of different authentication mechanisms; the other has the working title “why hardware-based OTP is dead” – the latter not mainly because of the potential security risks, but because of the limited usability and the cost of logistics. Thus, have a look at our upcoming research. The topic will also be discussed at EIC 2011 (European Identity Conference), May 10th to 13th, Munich.


Virtualization vs. Security

27.01.2011 by Martin Kuppinger

Some days ago, at an analyst meeting, a vendor talked about the relationship between virtualization and security. The argument was: at the hypervisor you can combine network security management, server security management, and some other aspects of security management – I can’t remember everything. Thus virtualization increases security, because you have one point of control.

Right – as long as you can control what administrators and operators are doing. Unfortunately, that’s not the case in typical virtualization environments. There is no PxM (Privileged Access/Account/Identity/User Management) at all. And in that case, combining everything is a problem, a nightmare from a compliance point of view. For sure there is value in having a single point of control, but only if you are able to adequately control its use.

I asked the speaker about the solutions around PxM offered by that vendor – there weren’t any.

Without specific virtualization security solutions – PxM being one very important amongst them – there is a virtualization security risk. There is potential for increasing security by using adequate technology, which is provided by several vendors. But claiming that there is value in combining a lot of highly elevated administrative actions without being able to manage them doesn’t make any sense.
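
To make the point tangible: PxM essentially means that privileged actions pass through a control point that authorizes and records them. A minimal, purely illustrative Python sketch – the action names and the authorization store are invented:

```python
import getpass
from datetime import datetime, timezone

AUTHORIZED = {("alice", "snapshot_vm"), ("bob", "migrate_vm")}  # invented entries
AUDIT_LOG: list[tuple[str, str, str, bool]] = []

def privileged(action: str):
    """Decorator: every privileged call is checked and audited."""
    def wrap(fn):
        def inner(*args, **kwargs):
            admin = getpass.getuser()
            allowed = (admin, action) in AUTHORIZED
            AUDIT_LOG.append((datetime.now(timezone.utc).isoformat(),
                              admin, action, allowed))
            if not allowed:
                raise PermissionError(f"{admin} is not authorized for '{action}'")
            return fn(*args, **kwargs)
        return inner
    return wrap

@privileged("snapshot_vm")
def snapshot_vm(vm_id: str) -> None:
    pass  # here the call into the hypervisor API would happen
```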

For a comprehensive overview of what customers expect around virtualization security, just have a look at that survey.

And don’t forget to register for EIC 2011 and Cloud 2011.


Lessons enterprises should learn from the recent Wikileaks disclosures

17.12.2010 by Martin Kuppinger

There has been a lot of discussion around Wikileaks publishing an incredible amount of data which had been classified as confidential by the US Government. I don’t want to discuss this specifically – many people have done so before, with fundamentally different conclusions. More interesting is what this means for private organizations, especially enterprises. Wikileaks has threatened some of them: the Russian oligarchs, the finance industry in general. That comes as no surprise. Wikileaks founder Assange rates them as “bad”, i.e. his enemies. Given that Wikileaks isn’t alone out there, there is an obvious threat to any enterprise. Some might think that construction plans of the defense industry should be published. Others might think the same should be done with blueprints from the automotive industry after claimed incidents. Or with the cost accounting of the utilities if power or gas appears to be too expensive. I don’t want to judge the reasons – I have my personal opinion on this, but that’s out of the scope of this post.

Looking at the situation from an enterprise perspective, it becomes obvious that information security has to move to the top of the CIO agenda (and the CEO agenda!) if it isn’t there yet – given that the enterprise isn’t willing to share everything with the public (blueprints, calculations, whatever,…). That requires approaches which are somewhat more fine-grained than the ones which obviously have been in place in the US government, allowing a private (or something like that – I’m not that familiar with the ranks in the US military) to access masses of documents. It also requires efficiently protecting the information itself instead of only the information system. Information tends to flow, and once it is out of the system, the system-level security doesn’t grip anymore.

That leads inevitably to the topic of Information Rights Management (IRM), which is a frequent topic in the blogs of Sachar Paulus and me – just have a look at our blogs. However, implementing IRM the typical way in organizations requires centralized policies, classifications, and so on. And classification obviously failed in the last Wikileaks incident. Thus, I’d like to bring in an idea Baber Amin recently brought up in a discussion during a KuppingerCole webinar. He talked about “identity-based encryption”, which in fact means encrypting information in a way which is controlled by the individual user. That leads to an IRM where the individual user controls who is allowed to use the information he creates or owns. It is not (mainly) the organization.

But: will that work? Some arguments and counter-arguments:

  1. Information is not accessible once the user leaves the organization: not correct – there might be an additional “master” key to allow recovery and so on (see the sketch after this list). Many lessons could be learned from Lotus Notes in this area, to name an example.
  2. There are no corporate policies: not correct – these could be understood as a second level of protection, adding to the first level managed by the user. I.e. classical IRM and personalized IRM could be combined.
  3. It won’t work because the user doesn’t understand what to do: not correct. Just look at how users are dealing with information security in their daily lives. For sure some things go wrong and lessons have to be learned (not to appear drunk in a photo on Facebook, for example), but overall it works pretty well. Combined with the corporate policies, this should turn out to be much better than corporate policies only. Trust the employee and the wisdom of crowds.
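
How point 1 can be addressed technically is shown in the following minimal sketch using the Python “cryptography” package: the content key is wrapped twice – once for the user, once for a corporate recovery key. All key names are hypothetical, and real IRM products add policy enforcement on top of the pure cryptography:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_key(content_key: bytes, public_key) -> bytes:
    """Encrypt the symmetric content key for one recipient (RSA-OAEP)."""
    return public_key.encrypt(
        content_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

# Hypothetical keys: one per user, one corporate recovery ("master") key.
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recovery_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Encrypt the document once with a fresh content key...
content_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(content_key).encrypt(nonce, b"confidential blueprint", None)

# ...and wrap that key twice: the user controls day-to-day access, while the
# organization can still recover the information after the user has left.
wrapped_for_user = wrap_key(content_key, user_key.public_key())
wrapped_for_recovery = wrap_key(content_key, recovery_key.public_key())
```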

Simply spoken: think about doing it differently than before. It is not about adding new tools at the (perforated) perimeter and all these point solutions. It is about building a few consistent lines of defense, including and especially next-generation IRM. For sure there is some way to go, and the tools aren’t there yet. But when thinking about how to protect your intellectual property and the secrets your organization wants to keep (for whatever reason – I don’t judge here…), you should definitely think beyond the traditional approaches of IT security – look especially at Information Security instead of Technology Security, i.e. the I and not the T in IT.

If you think this topic is worth thinking about, you shouldn’t miss EIC 2011 – the conference on IAM, GRC, Cloud Security, and thus also about the things discussed in this post. And don’t hesitate to ask for our advisory services ;-)


Security questions for authentication – a ticking privacy time bomb?

30.09.2010 by Martin Kuppinger

We are all familiar with external (and sometimes also internal) websites which require us to pick or define security questions and to provide answers to these questions. What is your mother’s maiden name? Which is your favourite sports team? Which colour do you like most? And so on… These questions are sometimes used as an additional means of authentication, for example by PayPal. More frequently they are used for password resets.

These days, while working with my colleagues Sachar Paulus and Sebastian Rohr on a comprehensive piece on strong authentication which will be published soon, we discussed the privacy aspects of all these (more or less strong) authentication approaches – and struggled… The answers to all the typical questions are privacy-relevant data. They unveil important knowledge about the user. The more questions, the more knowledge. You could argue that this isn’t that sensitive information – but first of all, it is personal data, and second, it depends on the questions.

But have you ever seen privacy-related disclaimers, buttons to accept the privacy policies of the organization, or anything like that around these questions? I can’t remember ever seeing one. That leads to the assumption that probably few people have ever thought about the privacy aspect of these questions – which means that the relevant compliance regulations have simply been ignored.

From our perspective, organizations should check where they use such questions and whether they are in sync with the compliance regulations they have to meet. Otherwise such a simple mechanism might become a real issue from a legal perspective.
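
One mitigation, besides a proper privacy notice, is not to store the answers at all. A minimal sketch in standard-library Python: only a salted, slow hash of a normalized answer is kept, so a database leak doesn’t directly expose the personal data (though low-entropy answers of course remain guessable):

```python
import hashlib, hmac, os

def _normalize(answer: str) -> bytes:
    # Case- and whitespace-insensitive, so "Red " and "red" match.
    return " ".join(answer.lower().split()).encode("utf-8")

def hash_answer(answer: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", _normalize(answer), salt, 100_000)
    return salt, digest

def verify_answer(answer: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", _normalize(answer), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_answer("Red")
print(verify_answer("  red ", salt, stored))  # True
```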

The website for the European Identity Conference 2011, to be held in May 2011 in Munich, is online now.

