Security in the banking world – still full of (unpleasant) surprises

22.10.2012 by Martin Kuppinger

I remember a conversation I had years back with the person responsible for online banking security at one of the larger banks. The conversation was about secure online banking. I learned that banks are not necessarily willing to invest the maximum in security. They simply look at the risk and then decide what they are willing to spend on online banking security. Given that I'm an advocate of risk-based approaches in IT security, I understand this position.

However, after all these years, I'm still not fully convinced that some of the banks are getting this approach right. The point is that they appear to be too reactive. They calculate the current costs and compare them with the current loss and damage. However, given that the number of attacks is increasing and that it takes quite a while to roll out new security technologies in online banking, they should be comparing the current costs with the expected (potentially considerably higher) future loss and damage. That would change the equation. Good risk management is always proactive.

Why am I writing about this? Just recently I stumbled upon two online articles about financial institutions with weaknesses in their security approaches for online banking and online trading.

One post was written by Dale Olds, now at VMware and before that a Novell veteran. He wrote about an online trading service which – in earnest, not kidding! – asked for his bank account credentials in order to act on his behalf.

The other article was published yesterday on The H, a security website. It talks about what Santander stores in cookies and claims that the bank even sometimes stores passwords in plain text in session cookies – held in memory at runtime, not written to disk. However, a well-constructed piece of malware could still access that information at runtime. The German portal heise.de, related to The H, found “only” information such as name and customer number in the cookies of the German branch of that bank.

Regardless of what is stored when, what struck me most was the reaction of Santander quoted in The H article:

A Santander spokesperson told The H: “The data items stored within our cookies, if compromised, would not allow access to our online services on their own and our primary login processes do not rely on cookie data. We review the use of our cookies and the data contained within them, and if necessary will review the IDs used by our customers to limit any future risks. We take the security of our customer data very seriously. Customers can change their IDs at any time themselves and are reminded not to use the ‘remember me’ function on public or shared computers.”

There are two points in that statement which really struck me. The first: if sensitive information is held in cookies, it is at least part of what attackers need to log in. Sensitive plain-text data always increases the attack surface. A bank shouldn't downplay that.

The second point is the last sentence. It is a rather poor attempt to abdicate the bank's responsibility: “Why didn't the customer change his IDs regularly? Why did he make other mistakes? We are not guilty…”

I’m fully aware that it isn’t that easy to find the balance between cost, security, and usability in online banking and online trading. But it is feasible. And blaming the customer definitely is the wrong approach. Not as wrong as asking for online banking credentials in a trading application, but wrong anyway.
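To illustrate what the safer pattern looks like – without claiming this is how any particular bank implements it – here is a minimal sketch in which the cookie only carries an opaque, randomly generated session token, while names, customer numbers and certainly passwords stay on the server. All names and values are made up.

```python
import secrets

# Server-side session store: the cookie never needs to carry the data itself.
SESSIONS = {}

def create_session(customer_number: str, display_name: str) -> str:
    """Create a session and return an opaque token suitable for a cookie."""
    token = secrets.token_urlsafe(32)          # unguessable, meaningless on its own
    SESSIONS[token] = {
        "customer_number": customer_number,    # stays on the server
        "display_name": display_name,
    }
    return token

def set_cookie_header(token: str) -> str:
    """Build a Set-Cookie header; Secure and HttpOnly limit exposure of the token,
    and the value itself reveals nothing if it is read from disk or memory."""
    return f"Set-Cookie: session={token}; Secure; HttpOnly; SameSite=Strict; Path=/"

# Usage: the browser holds only the random token; name, customer number and
# passwords never appear in the cookie at all.
print(set_cookie_header(create_session("12345678", "J. Doe")))
```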


Does Risk Management really fail in IT Security?

19.10.2012 by Martin Kuppinger

In an article published at Network World Online, Richard Stiennon, Chief Research Analyst at a company called IT-Harvest, claims that Risk Management inevitably fails in IT. He ends up recommending “threat management techniques” instead of risk management, arguing that it is about making decisions about threats. However, he seems to have a misconception of what risk management is about.

Risks are threats acting on assets; they have a specific probability and a potential impact. Richard Stiennon's thesis is based on the assumption that Risk Management necessarily starts with identifying all assets and assigning value to them. He then states that risk management methods invariably fail to predict actual disasters. He also claims that a consequence of Risk Management is that “protect everything” approaches are deployed.

I don't agree with that – from my perspective fairly limited – view of how Risk Management works in theory and practice. There is no doubt that you can do things wrong. But the conclusion that Risk Management is the wrong approach because of that is not valid.

I had a conversation with Art Coviello, Executive Vice President at EMC and Executive Chairman of RSA, the security division of EMC. We talked about RSA's strategic approach, a topic he then covered in his keynote at RSA Conference Europe the day after our conversation. This conversation was very much about understanding risks in IT as the starting point for successful IT security – especially about understanding the changing threat landscape and which types of assets will be in danger. It was also about the consequence of this approach: a shift from traditional “protect all” approaches towards a far more targeted, layered, and thus sophisticated security approach.

The most important point therein is that Risk Management is not about first listing all your IT assets. That can be useful, of course, but it is just one element. By the way: many IT organizations have a working inventory of IT assets and contract management in place, while others still struggle with identifying all their applications and systems, not to mention system or even information owners.

Risk Management is about understanding both: threats and their potential impact on assets. So, first of all, Risk Management does not necessarily start by identifying all assets. Nor is it an approach which is done once and never revisited; it is about setting up a process of regularly validating former assessments.

The most important points, when looking at Richard Stiennon's statements, are these:

  • Risk Management should always take the threats into account – and understanding threats is a good starting point.
  • You will have to know about your assets anyway. If you start by analyzing the threats, the next question is about the potential impact on assets.
  • You will also need to understand the probability and potential impact of threats. Otherwise you cannot distinguish high-risk from low-risk threats.

Simply put: using “threat management techniques” right and in a way that works is nothing other than Risk Management where you start by looking at the threats – an approach that isn't uncommon in good Risk Management implementations.

We have done many webinars around Risk Management that touch on that point. And there is a report out describing our view of a GRC Reference Architecture. This report is worth reading because it puts all these things into an organizational context. It is clearly about a balance between knowing the assets and understanding the threats.

But regardless of where you start (and I agree that understanding the threats is probably the best starting point): You still need to know about your assets, the probability of attacks/threats, and their potential impact in order to do things right. Only then will you be able to pick a good mix of countermeasures, both organizational and technical, instead of ending up with a “protect everything a little” approach.
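To make that point concrete, here is a deliberately simplistic sketch of a threat-first risk calculation: you start from the threats, but you still cannot rank them without estimated probabilities and the potential impact on the affected assets. All threats, assets and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    probability: float   # estimated likelihood per year, 0..1
    targets: list        # asset names this threat can realistically hit

# Invented asset values (expected damage in EUR if fully compromised).
ASSET_IMPACT = {"online banking portal": 5_000_000,
                "internal wiki": 50_000,
                "trading platform": 8_000_000}

threats = [
    Threat("banking trojan campaign", 0.6, ["online banking portal"]),
    Threat("credential phishing",     0.8, ["online banking portal", "trading platform"]),
    Threat("defacement",              0.3, ["internal wiki"]),
]

def risk_score(t: Threat) -> float:
    """Risk = probability x potential impact, summed over the affected assets."""
    return t.probability * sum(ASSET_IMPACT[a] for a in t.targets)

# Rank threats: this is where "threat management" and risk management meet --
# without asset impact and probability there is nothing to rank by.
for t in sorted(threats, key=risk_score, reverse=True):
    print(f"{t.name:28s} risk ~ {risk_score(t):12,.0f}")
```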


Relevance of recertification

27.10.2011 by Martin Kuppinger

In a recent briefing with CrossIdeas, the management buy-out of the former Engiweb, an Italian software manufacturer in the area of Access Governance and Dynamic Authorization Management, they demonstrated an interesting feature: doing recertification based on relevance. Recertification of access rights is a key element of regulatory compliance. It is usually done on a pretty standardized schedule, typically once or twice a year. For some specific systems or groups of users we frequently see shorter intervals, i.e. a somewhat risk-oriented approach is not uncommon. However, cynics might say that the main purpose still is to make the auditors happy.

CrossIdeas has now implemented an approach they call “relevance”. Based on several criteria, such as the number of SoD violations, the system identifies the most relevant users for recertification. Currently it supports six different parameters, whose weights can easily be changed using sliders. The least relevant users can then be removed – again using a slider – from the result set (a relevance map), leaving only the relevant ones. Recertification can then focus specifically on them.
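CrossIdeas hasn't published the underlying algorithm, so the following is only my guess at the general shape of such a relevance score: a weighted sum over per-user risk indicators, with the weights playing the role of the sliders and a cut-off removing the least relevant users from the campaign. The six indicators and all numbers below are invented.

```python
# Hypothetical relevance scoring for a focused recertification campaign.
# Indicators and weights are invented; the vendor's actual parameters differ.
WEIGHTS = {            # what the "sliders" would adjust
    "sod_violations": 0.40,
    "dormant_accounts": 0.15,
    "privileged_roles": 0.25,
    "recent_role_changes": 0.10,
    "orphaned_entitlements": 0.05,
    "failed_last_certification": 0.05,
}

users = {
    "alice": {"sod_violations": 3, "dormant_accounts": 0, "privileged_roles": 2,
              "recent_role_changes": 1, "orphaned_entitlements": 0, "failed_last_certification": 1},
    "bob":   {"sod_violations": 0, "dormant_accounts": 1, "privileged_roles": 0,
              "recent_role_changes": 0, "orphaned_entitlements": 2, "failed_last_certification": 0},
}

def relevance(indicators: dict) -> float:
    """Weighted sum of the per-user risk indicators."""
    return sum(WEIGHTS[k] * v for k, v in indicators.items())

def campaign_scope(users: dict, cutoff: float) -> list:
    """Keep only users above the relevance cut-off (the 'remove least relevant' slider)."""
    scored = {name: relevance(ind) for name, ind in users.items()}
    return sorted((n for n, s in scored.items() if s >= cutoff),
                  key=lambda n: scored[n], reverse=True)

print(campaign_scope(users, cutoff=1.0))   # -> ['alice']
```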

This feature isn't a full replacement for standard, regular recertification campaigns, which are also supported by IDEAS – the name of the CrossIdeas product. Relevance is, from my perspective, a nice concept that brings value to customers because they can easily run focused recertification campaigns for the most relevant users in addition to standard recertification. That not only makes the auditor happy, but also helps to better mitigate access risks. Not that standard recertification doesn't help – but there is room for improvement, and CrossIdeas has demonstrated an approach to achieve that which will be available in the new release due later this year.


What is the future of trust?

29.09.2011 by Martin Kuppinger

Trust is a fundamental concept of today’s IT. Security is based on trust.

We have (or better: had, after DigiNotar?) trust that a web server which has a valid SSL certificate is the server it claims to be.

We had trust that RSA SecurID tokens are secure (which they still are to some degree, but to a lower degree than before).

We have trust that our authentication in the Active Directory is done in a secure way.

We trust the identity provider when using identity federation.

However, especially the first two examples raise the question of whether the concept of trust is still a foundation to build on. On the other hand: are there any alternatives?

I think we will need to continue building on trust as a concept. There is no real alternative. However, we need to be much more careful with this concept and add two other approaches:

  • Mistrust
  • Risk

Mistrust means that we shouldn't take things for granted. We might challenge “facts” – e.g. authentication decisions and so on. In fact, mistrust is not really new. We might check the URLs behind suspicious links – are they really pointing to eBay, PayPal or whomever they claim to point to? We add additional tiers of authentication or stronger authentication mechanisms for sensitive interactions and transactions. But in light of what is happening these days, with more cyber-attacks and even well-secured, experienced organizations like RSA becoming victims of successful attacks, mistrust becomes more important.

That is related to the concept of risk. Risk relates to

  • interactions and transactions performed and the information assets affected
  • the level of mistrust and the “objective”, factual security risks

This relation is fundamental. We need to understand what could happen to our information assets (and the real assets behind them). And we need to understand how much mistrust we need. Based on that we can define what we need beyond the trust we might have today.

Technically, this leads to the need for flexibility and versatility. It's not about a specific type of solution; it is about the ability to combine multiple technologies (for authentication, fraud detection, …) depending on the risks and the level of mistrust. The bad news, however, is that mistrust will increase and trust will decrease, which will make it more complex to achieve an acceptable level of security for specific risks. And some of the concepts – like SSL – are obviously not sufficient by themselves to address today's and tomorrow's security challenges. However, SSL++, i.e. SSL plus other approaches, might suit our needs. Approaches like the one of convergence.io might also help us to better rate the risks and to apply the concept not only of trust but also of mistrust. And, despite the mistrust we might feel towards rating agencies in the finance world, having rating agencies for organizations we have to trust, such as CAs, might be another approach.
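What “SSL++” could mean in practice is open; one obvious candidate for an additional layer of mistrust is not relying on CA validation alone but also cross-checking or pinning the certificate the server actually presents – roughly the direction notary approaches like convergence.io take. The sketch below only illustrates that idea; host, port and the pinned fingerprint are placeholders.

```python
import hashlib
import socket
import ssl

# Hypothetical pinned fingerprint obtained out-of-band (e.g., on first verified contact).
PINNED_SHA256 = "0123456789abcdef" * 4   # placeholder, not a real fingerprint

def fingerprint_matches(host: str, port: int = 443) -> bool:
    """Connect with normal CA validation, then add a second, independent check:
    does the certificate's SHA-256 fingerprint match the value pinned earlier?"""
    context = ssl.create_default_context()           # trust layer: CA validation
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    seen = hashlib.sha256(der_cert).hexdigest()      # mistrust layer: pinning
    return seen == PINNED_SHA256

# fingerprint_matches("online.example-bank.com")  # hypothetical host
# A mismatch does not prove an attack, but it is exactly the kind of signal
# that justified mistrust should act on.
```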


The UBS case: Again 2 billion US$ lost due to unauthorized transactions of a trader

15.09.2011 by Martin Kuppinger

Today, the next story about banks failing to manage trading risks hit the news. It remains unclear what allowed the trader to execute the unauthorized (and thus most likely illegal) transactions which led to that loss. However, the Risk Management of UBS obviously failed. By the way: UBS had to announce this on the very day the Swiss parliament started a debate about new regulations for the finance industry.

It will be interesting to hear why that could happen. Did some people cooperate? Did the risk management system for these specific types of transactions fail? Or was it an Access Management problem like at SocGen some time ago, where the trader was able to act as his own control? Whatever the reason, the incident proves that there is still a long way to go in Risk Management and overall GRC – not only in the finance industry.
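Whatever the root cause at UBS turns out to be, the SocGen comparison points to a very basic control that Access Governance is supposed to enforce: segregation of duties, i.e. nobody executes and approves the same transaction. Purely as a hypothetical illustration, such a check could look like this:

```python
class SoDViolation(Exception):
    """Raised when the same identity would execute and approve a transaction."""

def approve_trade(trade: dict, approver_id: str, approver_roles: set) -> None:
    # Rule 1: a trader must never confirm their own transaction.
    if approver_id == trade["trader_id"]:
        raise SoDViolation("self-approval is not permitted")
    # Rule 2: executing and approving are mutually exclusive (toxic) roles.
    if "trader" in approver_roles and "trade_approver" in approver_roles:
        raise SoDViolation("toxic role combination: trader + approver")
    trade["status"] = "approved"

trade = {"id": "T-4711", "trader_id": "u1001", "notional": 2_000_000_000, "status": "pending"}
approve_trade(trade, approver_id="u2002", approver_roles={"trade_approver"})   # fine
# approve_trade(trade, approver_id="u1001", approver_roles={"trader"})         # raises SoDViolation
```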

GRC is a key task for C-level management. It needs sufficient funding. It needs support for the organizational changes required to build an organization with a high degree of process maturity and an understanding of the GRC requirements. It needs a strategic approach to integrating Business and IT to optimally support GRC, given that most business relies on IT systems and fraud in these systems causes the most severe harm. It needs an organizational and an IT-architectural approach to manage different regulations and all types of risks in a structured and efficient way.

For those thinking about how to move forward in GRC, today's KuppingerCole webinar might be worth attending. It won't answer all questions, but it will provide some valuable hints for moving forward in GRC. For sure, this is a long journey. But I strongly believe that it is feasible to avoid incidents like the one which has now happened at UBS – and to mitigate the overall risks for organizations through a strategic GRC initiative (instead of point solutions).


Virtualization vs. Security

27.01.2011 by Martin Kuppinger

Some days ago, a vendor talked at an analyst meeting about the relationship between virtualization and security. The argument was: at the hypervisor level you can combine network security management, server security management, and some other aspects of security management – I can't remember them all. Thus virtualization increases security, because you have one point of control.

Right – as long as you can control what administrators and operators are doing. Unfortunately, that's not the case in typical virtualization environments. There is no PxM (Privileged Access/Account/Identity/User Management) at all. And in that case, combining everything is a problem – a nightmare from a compliance point of view. For sure there is value in having a single point of control, but only if you are able to adequately control its use.

I asked the speaker about the PxM solutions offered by that vendor – there weren't any.

Without specific virtualization security solutions – PxM being a very important one among them – virtualization is a security risk. There is potential to increase security by using adequate technology, which several vendors provide. But claiming that there is value in combining a lot of highly elevated administrative capabilities without being able to manage them doesn't make any sense.

For a comprehensive overview of what customers expect around virtualization security, just have a look at that survey.

And don’t forget to register for EIC 2011 and Cloud 2011.


Lessons enterprises should learn from the recent wiki-leak

17.12.2010 by Martin Kuppinger

There has been a lot of discussion around Wikileaks publishing an incredible amount of data which had been classified as confidential by the US Government. I don't want to discuss this specifically – many people have done so before, with fundamentally different conclusions. More interesting is what this means for private organizations, especially enterprises. Wikileaks has threatened some of them: the Russian oligarchs, the finance industry in general. That comes as no surprise; Wikileaks founder Assange rates them as “bad”, i.e. as his enemies. Given that Wikileaks isn't alone out there, there is an obvious threat to any enterprise. Some might think that construction plans of the defense industry should be published. Others might think the same should be done with blueprints from the automotive industry after claimed incidents, or with the cost accounting of utilities if power or gas appears to be too expensive. I don't want to judge the reasons – I have my personal opinion on this, but that's outside the scope of this post.

Looking at that situation from an enterprise perspective, it becomes obvious that information security has to move to the top of the CIO agenda (and the CEO agenda!) if it isn't there yet – assuming the enterprise isn't willing to share everything with the public: blueprints, calculations, whatever. That requires approaches which are somewhat more fine-grained than the ones which obviously have been in place in the US government, allowing a private (or something like that – I'm not that familiar with the ranks in the US military) to access masses of documents. It also requires efficiently protecting the information itself instead of only the information system. Information tends to flow, and once it is out of the system, system-level security no longer has any grip on it.

That leads inevitably to the topic of Information Rights Management (IRM), a frequent topic in the blogs of Sachar Paulus and myself – just have a look. However, implementing IRM the typical way in organizations requires centralized policies, classifications, and so on. And classification obviously failed in the latest Wikileaks incident. Thus, I'd like to bring in an idea Baber Amin recently raised in a discussion during a KuppingerCole webinar. He talked about “identity-based encryption”, which in fact means encrypting information in a way that is controlled by the individual user. That leads to an IRM where the individual user – and not (mainly) the organization – controls who is allowed to use the information he creates or owns.

But will that work? Some arguments and counter-arguments:

  1. Information is not accessible once the user leaves the organization: Not correct – there might be an additional “master” key to allow recovery and so on (see the sketch after this list). Many lessons could be learned from Lotus Notes in that area, to name one example.
  2. There are no corporate policies: Not correct – these could be understood as a second level of protection, adding to the first level managed by the user. In other words, classical IRM and personalized IRM could be combined.
  3. It won't work because the user doesn't understand what to do: Not correct. Just look at how users deal with information security in their daily lives. For sure some things go wrong and lessons have to be learned (not appearing drunk in a photo on Facebook, for example), but overall that works pretty well. Combined with corporate policies, this should turn out to be much better than corporate policies alone. Trust the employee and the wisdom of crowds.
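To make point 1 a bit more tangible: what is described there is essentially envelope encryption with more than one key holder – the document key is wrapped once for the owner and once for a corporate recovery key, so user control and corporate recovery can coexist. This is not identity-based encryption in the strict cryptographic sense, just a rough sketch of the idea using the third-party Python cryptography package; all keys are generated ad hoc for the example.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Ad-hoc keys for illustration: the document owner and a corporate recovery key.
owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
corporate_recovery_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def protect(document: bytes):
    """Encrypt the document once, wrap the content key for each authorized party."""
    content_key = Fernet.generate_key()                 # symmetric key for the payload
    ciphertext = Fernet(content_key).encrypt(document)
    wrapped = {
        "owner": owner_key.public_key().encrypt(content_key, OAEP),
        "corporate": corporate_recovery_key.public_key().encrypt(content_key, OAEP),
    }
    return ciphertext, wrapped

def recover(ciphertext: bytes, wrapped: dict, private_key, party: str) -> bytes:
    """Either the owner or the corporate recovery key can unwrap and decrypt."""
    content_key = private_key.decrypt(wrapped[party], OAEP)
    return Fernet(content_key).decrypt(ciphertext)

ct, keys = protect(b"confidential blueprint")
assert recover(ct, keys, owner_key, "owner") == b"confidential blueprint"
assert recover(ct, keys, corporate_recovery_key, "corporate") == b"confidential blueprint"
```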

Simply put: think about doing things differently than before. It is not about adding new tools at the (perforated) perimeter and all these point solutions. It is about building a few consistent lines of defense, including and especially next-generation IRM. For sure there is some way to go and the tools aren't there yet. But when thinking about how to protect your intellectual property and the secrets your organization wants to keep (for whatever reason – I don't judge here…), you should definitely think beyond the traditional approaches of IT security – look especially at Information Security instead of Technology Security, i.e. the I and not just the T in IT.

If you think this topic is worth thinking about, you shouldn't miss EIC 2011 – the conference on IAM, GRC, and Cloud Security, and thus also on the things discussed in this post. And don't hesitate to ask for our advisory services ;-)


IBM acquires OpenPages – and proves our GRC vision

16.09.2010 by Martin Kuppinger

It is always nice when trends an analyst has predicted become reality. I've been talking and blogging for a pretty long time about the need for an integrated GRC approach, especially beyond isolated “Enterprise GRC” with little automation. Yesterday, IBM announced that it has agreed to acquire OpenPages, one of the most prominent vendors in the Enterprise GRC space. That isn't really a surprise, given that IBM has been investing in the GRC market for quite a while. The really interesting parts of the presentation IBM gave on this acquisition yesterday are those where the Enterprise GRC layer of OpenPages becomes integrated with IBM's IT GRC tools – both Business Analytics and many Tivoli tools. In other words: it is about integrating different layers of GRC to provide a more complete and, through automation, more current view of the controls.

That fits well with our expectations as well as with the KuppingerCole GRC Reference Architecture. Successful GRC is based on a mix of manual and automated controls. I remember a conversation with the OpenPages executives where they in fact denied the need for such an integration. Now that they are becoming part of IBM, that seems to be changing fundamentally, because the IBM strategy is about exactly this integration, with a strong layer on top for the executive view.

While some vendors like MetricStream are pushing this approach and others like RSA/EMC, with their Archer acquisition in January 2010, have the same potential, it will be very interesting to observe how other “Enterprise GRC” vendors will react (I still believe that this is an arrogant term as long as these solutions ignore most parts of the enterprise and are mainly high-level solutions focused on manual controls with little integration into the other GRC layers). With the IBM acquisition of OpenPages, the days when a vendor could ignore the integration of GRC at all levels are past. Thus, this acquisition will heavily influence the overall GRC market, and some of the more prominent “Enterprise GRC” players might end up on the losing side.


How much security do we need?

04.02.2010 by Martin Kuppinger

My colleague Jörg Resch blogged today about the ignorance regarding layered security approaches. Yes, there is no absolute security. Security is tightly related to risk. Given that we can't have perfect security, especially not with people using systems, it's always about the balance between the security-imposed risk and the cost of risk mitigation.

That's a very simple balance: the higher the risks, the more you can and should spend on risk mitigation – as long as risk mitigation is feasible (which is not always the case – a life insurance doesn't help you mitigate the risk of dying…). I deliberately used the term “security-imposed risk”. It is NOT about security risks, but about the consequences of security-related incidents. Stolen data and its abuse, illegal transactions, customer loss due to a decrease in credibility, … – that's what it is about.

But that doesn't change the fundamentals: when thinking about security, we have to think about risks. I've blogged about Risk Management before. What we have to understand is that there is not THE information or system which has to be protected. We have different types of systems, information, and transactions which are at different risk. And we have to apply security (technology and organization) according to the risk associated with these different systems, information, and transactions.

There is not THE level of security you need. You need appropriate security for different types of transactions and interactions (and the related systems). Using risk as the main criterion in decisions about security investments helps to optimize what is done in IT security. And focusing on a few consistent approaches at different levels (for example, a few different types of authentication with step-up features, based on a versatile authentication platform; or a consistent authorization strategy with a few consistent levels of management and protection) will be much cheaper than spending too much money on point solutions like many (not all) of the DLP tools out there.
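As a toy example of what “a few consistent levels” instead of point solutions could mean: let a central mapping from transaction type to risk tier decide which authentication strength is required, and trigger step-up authentication whenever the current session falls short. The tiers and the mapping below are invented.

```python
from enum import IntEnum

class AuthLevel(IntEnum):
    PASSWORD = 1           # knowledge only
    OTP = 2                # step-up: one-time password / token
    OTP_PLUS_SIGNING = 3   # step-up: transaction signing for the highest tier

# Invented mapping of transaction types to required authentication levels.
REQUIRED_LEVEL = {
    "view_balance": AuthLevel.PASSWORD,
    "domestic_transfer_small": AuthLevel.OTP,
    "international_transfer_large": AuthLevel.OTP_PLUS_SIGNING,
}

def authorize(transaction_type: str, session_auth_level: AuthLevel) -> str:
    """Return 'ok' or the step-up the platform should trigger for this transaction."""
    required = REQUIRED_LEVEL[transaction_type]
    if session_auth_level >= required:
        return "ok"
    return f"step-up required: {required.name}"

print(authorize("view_balance", AuthLevel.PASSWORD))               # ok
print(authorize("international_transfer_large", AuthLevel.OTP))    # step-up required: OTP_PLUS_SIGNING
```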

Understanding that different types of interactions and transactions have to be protected differently is the key to successful IT security concepts. Risk is the core criterion for doing that. Interestingly, this is not really new. What governmental and military organizations do in “information classification” (which started long before the invention of the computer) is nothing other than using risk as a criterion and defining different levels of protection for different interactions and transactions. Such concepts don't have to be extremely complex. But a differentiated view has to be the guideline for everything done in IT security.

To learn more about this and to discuss this with your peers, have a look at our upcoming virtual conferences and our European Identity Conference 2010.


The risk of costs

28.01.2010 by Martin Kuppinger

There is constant pressure not only on IT but on all areas of organizations to reduce costs. However, that frequently ends up in higher risks and potentially higher costs due to these risks. The problem is that most organizations, especially in controlling and management, think much more about cost than about risk. But cost savings (which are not necessarily negative) made without a risk view are themselves a risk – somewhat of a tautology, I know…

That is why Risk Management should be a standard and central element in management, for business as well as for IT.



© 2014 Martin Kuppinger, KuppingerCole