What is the future of trust?

29.09.2011 by Martin Kuppinger

Trust is a fundamental concept of today’s IT. Security is based on trust.

We have (or better: had, after DigiNotar?) trust that a web server which has a valid SSL certificate is the server it claims to be.

We had trust that RSA SecurID tokens are secure (which they still are to some degree, but at a lower level than before).

We have trust that our authentication in the Active Directory is done in a secure way.

We trust the identity provider when using identity federation.

However, especially the first two examples raise the question whether the concept of trust is still a foundation to build on. On the other hand: are there any alternatives?

I think we will still need to build on trust as a concept. There is no real alternative. However, we need to be much more careful with this concept and complement it with two other approaches:

  • Mistrust
  • Risk

Mistrust means that we shouldn’t take things for granted. We might challenge “facts” – e.g. authentication decisions and so on. In fact, mistrust is not really new. We might check the URLs behind suspicious links – do they really point to eBay, PayPal or whomever they claim to? We add additional tiers of authentication or stronger authentication mechanisms for sensitive interactions and transactions. But in the light of what is happening these days, with more cyber-attacks and even well-secured, experienced players like RSA becoming victims of successful attacks, mistrust becomes more important.
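The idea of checking where a link really points can be illustrated with a minimal sketch. The expected-domain list and function name below are illustrative assumptions, and a real check would need to cover far more cases (IDN homoglyphs, open redirects, etc.):

```python
from urllib.parse import urlparse

# Domains we actually expect these links to point to (illustrative list).
EXPECTED_DOMAINS = {"ebay.com", "paypal.com"}

def points_to_expected_domain(url: str) -> bool:
    """Return True only if the link's hostname is one of the expected
    domains or a subdomain of one. This catches the common phishing
    trick of look-alike hosts such as paypal.com.evil.example."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in EXPECTED_DOMAINS)

print(points_to_expected_domain("https://www.paypal.com/signin"))      # True
print(points_to_expected_domain("https://paypal.com.evil.example/x"))  # False
```

The key design point is to compare the parsed hostname, not to search for the brand name anywhere in the URL string – the latter is exactly what phishers exploit.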

This is related to the concept of risk. Risk relates to

  • interactions and transactions performed and the information assets affected
  • the level of mistrust and the “objective”, factual security risks

This relation is fundamental. We need to understand what could happen to our information assets (and the real assets behind them). And we need to understand how much mistrust we need. Based on that we can define what we need beyond the trust we might have today.

Technically, this leads to the need for flexibility and versatility. It’s not about a specific type of solution; it is about the ability to combine multiple technologies (for authentication, fraud detection, …) depending on the risks and the level of mistrust. The bad news, however, is: mistrust will increase and trust will decrease, which will make it more complex to achieve an acceptable level of security for specific risks. And some of the concepts – like SSL – are obviously not sufficient by themselves to address today’s and tomorrow’s security challenges. However, “SSL++”, i.e. SSL plus other approaches, might suit our needs. And approaches like the one of convergence.io might help us better rate the risks and apply the concept not only of trust but also of mistrust. And, despite the mistrust we might feel for rating agencies in the finance world, having rating agencies for organizations we have to trust, such as CAs, might be another approach.
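Combining technologies depending on risk and mistrust can be sketched as a simple step-up policy. All thresholds, scales and factor names below are hypothetical illustrations, not a product recommendation:

```python
# Hypothetical sketch of risk-adaptive authentication: the required set of
# mechanisms grows with a combined risk score derived from the value of the
# affected information asset and the current level of mistrust.

def required_factors(asset_value: int, mistrust: int) -> list[str]:
    """asset_value and mistrust are each rated 1 (low) to 5 (high);
    their product serves as a crude combined risk score (1..25)."""
    risk = asset_value * mistrust
    factors = ["password"]
    if risk >= 6:                                   # moderate risk: add a second factor
        factors.append("one-time token")
    if risk >= 15:                                  # high risk: step up once more
        factors.append("out-of-band confirmation")
    return factors

print(required_factors(asset_value=1, mistrust=2))  # low risk: password only
print(required_factors(asset_value=5, mistrust=4))  # high risk: three mechanisms
```

The point of the sketch is the shape of the policy, not the numbers: the decision of *which* technologies to combine is driven by the risk and mistrust ratings, not hard-wired into a single solution.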

Microsoft acquires BHOLD technology assets

23.09.2011 by Martin Kuppinger

Today Microsoft announced that it has acquired technology assets from BHOLD, a Dutch vendor of Access Governance technology. Microsoft now owns technology which had been missing from its IAM portfolio and thus enters the Access Governance market. Whether that will happen through enhancements of the existing FIM 2010 product or by adding another product based on the BHOLD technology hasn’t been announced yet. Either way, the deal will change the Access Governance market, particularly regarding the offerings which are positioned to complement Microsoft FIM.

KuppingerCole will follow up on this news and provide further information as soon as it is available. Overall, this acquisition proves that Microsoft continues to invest in the broader IAM space and thus rates this market segment as important to its customers. For existing BHOLD customers, the acquisition provides new opportunities, given that they are now working with a much bigger vendor. However, the impact on existing customers can only be assessed once the Microsoft roadmap is unveiled. In general, we recommend that existing BHOLD customers stay calm until more information is available. For customers investing or planning to invest in FIM 2010, the acquisition is definitely good news, because it means that FIM will grow beyond its somewhat technical approach into a more business-oriented solution over time. However, without the roadmap being unveiled, it is hard to predict when Microsoft customers will really benefit.

Who are the good guys – the ones that keep you informed about security issues or the others?

20.09.2011 by Martin Kuppinger

I understand the reasoning behind it – but it is still contradictory. People expect IT vendors to inform them quickly about security issues. And people then blame them for those security issues. OK, anyone affected by a security issue has some reason to blame the company responsible for it. Nevertheless, some more fairness would help in achieving even more openness. Admitting a security issue and fixing it is obviously better than just trying to hide what has happened.

Let’s take some examples. Microsoft has been bashed for years for not doing enough to secure its products. The company has built a sophisticated system for patching and informing the public. It is very open regarding security weaknesses. But it is still blamed for being insecure. Apple is much more reluctant in its openness regarding security issues. But Apple isn’t blamed as much as Microsoft. Fair or unfair? I personally prefer the Microsoft approach – Microsoft was among the first to provide a patch for the DigiNotar case. It took Apple much longer.

The DigiNotar case is my second example. Today the news of DigiNotar’s bankruptcy spread, after the company had to admit that its root CA (Certificate Authority) had been hacked. The bad thing is that it looks like DigiNotar knew about this long before. They didn’t inform the public. Good or bad? I opt for bad – they severely increased the security risks across the entire Internet.

RSA Security is another example. They informed the public about the hack of the RSA SecurID seeds. They informed their customers. And they got blamed. I believe that the RSA approach is far better than the DigiNotar approach. Customers were informed and thus able to react. RSA spent a lot of money helping customers address their issues.

We can blame all of them – Microsoft, Apple, DigiNotar, RSA, and all the others not mentioned – for security bugs. I remember a professor of informatics calculating back in the 1960s that, beyond a certain (relatively low) number of lines of code, there is no chance of avoiding bugs. Thus, security bugs in code and security weaknesses in IT environments are somewhat “natural”. And, by the way, it’s always a question of how much an attacker invests to succeed. There is no absolute security. RSA did a lot to secure the seeds, knowing that they were the biggest risk (and every RSA SecurID customer could and should have known about that “single point of failure”). DigiNotar, from what I’ve heard, didn’t do as much. Microsoft has invested massively in improving security, but is still on a years-long journey towards better code and so on.
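The point about unavoidable bugs can be made with a back-of-envelope calculation. The defect-density range used here (roughly 1 to 25 defects per 1,000 lines of code) is a commonly cited industry figure, not a measurement of any specific vendor’s code:

```python
# Back-of-envelope illustration of why bugs at scale are "natural".
# defects_per_kloc is an assumed industry-style defect density, not real data.

def expected_defects(lines_of_code: int, defects_per_kloc: float) -> float:
    """Rough expected number of residual defects for a codebase."""
    return lines_of_code / 1000 * defects_per_kloc

# Even at an excellent rate of 1 defect per KLOC, a 10-million-line product
# would still ship with on the order of 10,000 defects -- and only a few of
# them need to be security-relevant for attackers to have a target.
print(expected_defects(10_000_000, 1.0))  # 10000.0
```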

In the end, it is a difficult balance. Openness can’t be an excuse for security issues. But openness is better than obfuscating or hiding security issues. Openness allows customers to evaluate their risks and to act. And risks are better than uncertainty, which is the result of not being open about security issues. You can manage risks – but it’s hard to deal with uncertainty.

The UBS case: again 2 billion US$ lost due to unauthorized transactions by a trader

15.09.2011 by Martin Kuppinger

Today, the next story about banks failing to manage trading risks hit the news. It remains unclear what allowed the trader to execute the unauthorized (and thus most likely illegal) transactions which led to that loss. However, the risk management at UBS obviously failed. By the way: UBS had to announce this on the very day the Swiss parliament started a debate about new regulations for the finance industry.

It will be interesting to hear why this could happen. Did some people co-operate? Did the risk management system for these specific types of transactions fail? Or was it an Access Management problem like at SocGen some time ago, where the trader was able to control his own transactions? Whatever the reason, the incident proves that there is still a long way to go in Risk Management and overall GRC – not only in the finance industry.
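The Access Management failure hinted at above is what segregation-of-duties (SoD) controls are meant to catch: nobody should hold a “toxic” combination of entitlements such as executing trades and approving them. A minimal sketch, with purely illustrative rule and entitlement names:

```python
# Minimal segregation-of-duties (SoD) check: flag every "toxic" combination
# of entitlements that a user holds in full. Combinations and entitlement
# names are illustrative assumptions, not taken from any real system.

TOXIC_COMBINATIONS = [
    {"execute_trade", "approve_trade"},
    {"execute_trade", "confirm_settlement"},
]

def sod_violations(user_entitlements: set[str]) -> list[set[str]]:
    """Return each toxic combination fully contained in the user's entitlements."""
    return [combo for combo in TOXIC_COMBINATIONS if combo <= user_entitlements]

trader = {"execute_trade", "approve_trade", "view_positions"}
print(sod_violations(trader))   # flags the execute/approve combination
auditor = {"view_positions"}
print(sod_violations(auditor))  # no violations
```

In a real Access Governance deployment such rules run continuously against provisioned entitlements, so that a trader who can approve his own transactions is flagged before – not after – the loss occurs.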

GRC is a key task for the C-level management. It needs sufficient funding. It needs support for the organizational changes, to build an organization with a high degree of process maturity and the understanding of the GRC requirements. It needs a strategic approach to integrate Business and IT to optimally support GRC, given that most business relies on IT systems and fraud in these systems causes the most severe harm. It needs an organizational and an IT-architectural approach to be able to manage different regulations and all types of risks in a structured and efficient way.

For those thinking about how to move forward in GRC, today’s KuppingerCole webinar might be worth attending. It won’t answer all questions, but it will provide some valuable hints for moving forward in GRC. For sure, this is a long journey. But I strongly believe that it is feasible to avoid incidents like the one that has now happened at UBS – and to mitigate the overall risks for organizations through a strategic GRC initiative (instead of point solutions).

© 2015 Martin Kuppinger, KuppingerCole