10.12.2012 by Martin Kuppinger
Recently, there was news here and here that a disgruntled technician of the Swiss spy agency NDB (Nachrichtendienst des Bundes) had stolen terabytes of counter-terrorism information shared between the NDB, the CIA, and MI6 (the UK spy agency). The person has been temporarily arrested. It is still unclear whether he has already sold any of that information.
This case, together with many others like the theft of data from Swiss banks, which is then sold to German tax offices, again highlights that the biggest security risk for most organizations comes from insiders. There is no doubt that the number of external attacks is increasing. There is no doubt about a massive risk for critical infrastructures. There is no doubt that manufacturing and, more generally, SCADA devices are at far higher risk than before.
However, there are two important aspects to consider:
- Many insiders have privileged access, frequently with a lack of control. They can potentially steal large amounts of data and cause massive harm.
- Many of the external attacks are in fact hybrid attacks involving insiders.
For organizations, this means that they should not focus only on external attacks. The concept of perimeter security is an illusion anyway. There is no such thing as “the perimeter around the organization” anymore. What organizations have to do is move forward to protect information, regardless of where it is accessed, where it resides, which device is used, and whether it is accessed by insiders or outsiders. Point solutions that claim to solve this issue won’t help without the bigger picture in mind. They just increase the risk of bad investments.
However, there are some things you have to do: Access Governance and Intelligence is one of them. Privilege Management is another. However, Privilege Management should be well integrated with Identity Provisioning and Access Governance/Intelligence instead of being a point solution. The most important thing to do now is to understand the big picture of information security. That’s what you should put at the top of your agenda for 2013.
To learn how to best establish Information Stewardship as a principle, you should have a look at our new report “From Data Leakage Prevention (DLP) to Information Stewardship”, #70587, which has been written by my colleagues Mike Small and Dave Kearns.
22.10.2012 by Martin Kuppinger
I remember a conversation I had years back with the person responsible for online banking security at one of the larger banks. The conversation was about secure online banking. I learned that banks are not necessarily willing to go to the maximum for security. They simply look at the risk and then decide what they are willing to invest in online banking security. Given that I’m an advocate of risk-based approaches in IT security, I understand this position.
However, I’m still, after all these years, not fully convinced that some of the banks are applying this approach correctly. The point is that they appear to me to be too reactive about these things. They calculate the current costs and compare them with the current loss and damage. However, given that the number of attacks is increasing and that it takes quite a while to roll out new security technologies in online banking, it should be about comparing the current costs with the expected (potentially considerably higher) loss and damage. That would change the equation. Good risk management is always proactive.
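The proactive version of that comparison can be sketched in a few lines. This is a minimal illustration; every figure below is invented, not taken from any bank:

```python
# Sketch of the reactive vs. proactive risk equation. All figures are
# hypothetical and purely illustrative.

def expected_loss(attacks_per_year: float, success_rate: float,
                  damage_per_incident: float) -> float:
    """Annualized expected loss: attack frequency x success likelihood x impact."""
    return attacks_per_year * success_rate * damage_per_incident

cost_of_controls = 2_000_000                   # yearly cost of new security tech
current = expected_loss(50, 0.02, 500_000)     # today's attack rate
projected = expected_loss(250, 0.02, 500_000)  # attack rate grows during rollout

# Reactive view: the investment looks too expensive against today's losses.
print(cost_of_controls > current)    # True
# Proactive view: against the projected losses, the investment pays off.
print(cost_of_controls < projected)  # True
```

The same controls that look uneconomic against today’s losses can be clearly justified against the losses expected by the time they are fully rolled out.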
Why am I writing about this? Just recently I stumbled upon two online articles about financial institutions with weaknesses in their security approaches for online banking and online trading.
One post was written by Dale Olds, now at VMware and before that a Novell veteran. He wrote about an online trading service which – in earnest, not kidding! – asked for his bank account credentials in order to act on his behalf.
The other article was published yesterday on The H, a security website. It talks about what Santander bank stores in cookies and claims that they sometimes even store passwords in plain text in session cookies – held in memory at runtime, not on disk. However, a well-constructed piece of malware could access that information at runtime. The German portal heise.de, related to The H, found “only” information like name and customer number in the cookies of the German branch of that bank.
Regardless of what is stored when, what struck me most was the reaction of Santander bank cited in the article of The H:
A Santander spokesperson told The H: “The data items stored within our cookies, if compromised, would not allow access to our online services on their own and our primary login processes do not rely on cookie data. We review the use of our cookies and the data contained within them, and if necessary will review the IDs used by our customers to limit any future risks. We take the security of our customer data very seriously. Customers can change their IDs at any time themselves and are reminded not to use the ‘remember me’ function on public or shared computers.”
There are two points in that statement which really struck me. The first: if there is sensitive information held in cookies, that is at least part of what attackers need to log in. Sensitive plain text data always increases the attack surface. A bank shouldn’t downplay that.
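For contrast, here is a minimal sketch of how session cookies are commonly hardened. The Set-Cookie attributes (Secure, HttpOnly) are standard HTTP; the cookie name and everything around it are hypothetical:

```python
# Sketch: a session cookie should carry only an opaque random identifier,
# never a name, customer number, or password. Attribute names are standard
# HTTP; the cookie layout itself is illustrative.
import secrets

session_id = secrets.token_urlsafe(32)  # random, reveals nothing if stolen
set_cookie = (
    f"SESSION={session_id}; "
    "Secure; "       # only ever sent over HTTPS
    "HttpOnly; "     # not readable by page JavaScript
    "Path=/"
)
# The server maps session_id to the user internally; no sensitive plain
# text leaves the server inside the cookie.
print("HttpOnly" in set_cookie)  # True
```

An opaque identifier in the cookie means that even a stolen cookie exposes nothing about the customer on its own.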
The second point is about the last sentence. This is a poor attempt to abdicate the bank’s responsibility: “Why didn’t the customer change his IDs regularly? Why did he make other mistakes? We are not guilty…”
I’m fully aware that it isn’t easy to find the balance between cost, security, and usability in online banking and online trading. But it is feasible. And blaming the customer is definitely the wrong approach. Not as wrong as asking for online banking credentials in a trading application, but wrong anyway.
19.10.2012 by Martin Kuppinger
In an article published at Network World Online, Richard Stiennon, Chief Research Analyst at a company called IT-Harvest, claims that Risk Management inevitably fails in IT. He ends up recommending “threat management techniques” instead of risk management. He says that it is about making decisions about threats. However, he seems to have a misconception about what risk management is.
Risks are threats acting on assets. They have a specific probability and a potential impact. Richard Stiennon’s thesis is based on the assumption that Risk Management necessarily starts with identifying all assets and assigning value to them. He then states that risk management methods invariably fail to predict actual disasters. He also claims that a consequence of Risk Management is that “protect everything” approaches are deployed.
I don’t agree with this – from my perspective fairly limited – view of how Risk Management works in theory and practice. There is no doubt that you can do things wrong. But the conclusion that Risk Management is the wrong approach because of that is not valid.
I had a conversation with Art Coviello, Executive Vice President at EMC and Executive Chairman of RSA, the security division of EMC. We talked about the strategic approach of RSA, a topic he then covered in his keynote at RSA Conference Europe the day after our conversation. This conversation was very much about understanding risks in IT as the starting point for successful IT security. It was especially about understanding the changing threat landscape and which types of assets will be in danger. It was also about the consequence of this approach: a shift from traditional “protect all” approaches towards a far more targeted, layered, and thus sophisticated security approach.
The most important point therein is that Risk Management is not about first listing all your IT assets. That can be useful, of course, but it is just one element. By the way: many IT organizations have a working inventory of IT assets and contract management in place, while others still struggle with identifying all their applications and systems, not to mention system or even information owners.
Risk Management is about understanding both threats and their potential impact on assets. So first of all, Risk Management does not necessarily start by identifying all assets. Nor is it an approach which is done once and never revisited. It is about setting up a process of regularly validating former assessments.
The most important points I see when looking at Richard Stiennon’s statements are:
- Risk Management should always take the threats into account – and understanding threats is a good starting point.
- You will have to know about your assets anyway. If you start by analyzing the threats, the next question is about the potential impact on assets.
- You will also need to understand the probability and potential impact of threats. Otherwise, you are not able to distinguish high-risk from low-risk threats.
Simply put: using “threat management techniques” done right and in a way that works is nothing other than Risk Management where you start by looking at the threats – an approach that isn’t uncommon in good Risk Management implementations.
We have done many webinars around Risk Management that touch on that point. And there is a report out describing our view on a GRC Reference Architecture. This report is worth reading because it puts all these things into an organizational context. It is clearly about a balance between knowing the assets and understanding the threats.
But regardless of where you start (and I agree that understanding the threats is probably the best starting point): You still need to know about your assets, the probability of attacks/threats, and their potential impact in order to do things right. Only then will you be able to pick a good mix of countermeasures, both organizational and technical, instead of ending up with a “protect everything a little” approach.
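The threat-first flavor of Risk Management described here can be sketched as a small scoring exercise. All threat names, probabilities, and impact figures below are invented for illustration:

```python
# Sketch of threat-first risk scoring: start from the threats, then map
# each to its probability and potential impact on assets.
threats = [
    {"name": "phishing",      "probability": 0.6, "impact": 200_000},
    {"name": "insider theft", "probability": 0.1, "impact": 5_000_000},
    {"name": "ddos",          "probability": 0.3, "impact": 50_000},
]

for t in threats:
    # Risk = probability x potential impact on the affected assets.
    t["risk"] = t["probability"] * t["impact"]

# Ranking separates high-risk from low-risk threats, so countermeasures
# can be targeted instead of "protecting everything a little".
ranked = sorted(threats, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in ranked])  # ['insider theft', 'phishing', 'ddos']
```

Note that even this threat-first sketch cannot avoid knowing the assets: the impact column only makes sense once you know what a threat can hit.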
17.10.2012 by Martin Kuppinger
BYOD (Bring Your Own Device) is one of the hot topics of today’s IT. Many vendors promise to solve the BYOD challenges with MDM (Mobile Device Management), MAM (Mobile Application Management), or other technologies. Most of these technologies fix some of the problems. But all of them fall short of the great promise of solving all of your BYOD challenges.
Even worse, solving BYOD challenges is not what you should really care about. BYOD is just a symptom of a far bigger evolution. This evolution is about what my colleague Craig Burton just recently called “The Computing Troika” – the three major changes we are facing: Cloud Computing, Mobile Computing, Social Computing. This is about new delivery models for IT. It is about users using new types of devices (and more of them) inside and outside the corporate network. It is about opening up our communication for more external users, including our customers, leads, prospects, and so on. And it’s about communicating with them in a different way.
The situation we are facing today is that we are observing exponential growth in all these areas. We thus need to find other ways to deal with these evolutions.
BYOD in that context is just about supporting new devices for some specific groups of users. In the narrow context of BYOD most vendors have, it is about smartphones and tablets used by employees and maybe some groups of external users such as contract workers.
In a broader context, BYOD is about all types of devices which are not owned by the organization but are used to access corporate systems and corporate information. That might be the PC in the home office, the laptop of the auditor, the PC in the Internet Café, or whatever.
However, BYOD is just one part of that story. There are COPE and COD as well. COPE stands for Corporate Owned, Personally Enabled; COD is the acronym for Corporate Owned Device. In both areas, it is about supporting new types of devices: new smartphones, tablets, and all the devices that will appear on the market within the next few years which we can’t even imagine today.
So from whatever perspective you look at BYOD, it is just a small subset of a bigger problem. The challenge is what the Computing Troika means for IT in general and Information Security specifically. This is about allowing all types of users, on all types of devices, access to corporate information and corporate systems, regardless of the deployment model.
If you solve that issue (which you have to do anyway) you have solved the BYOD issue. If you simply solve the BYOD challenge, you haven’t solved much. You probably have invested in some point solutions and some technology that fails in solving the challenges imposed by the Computing Troika. True, some of the approaches might provide value even in the bigger context. But that is something you don’t know for sure when just looking at the isolated BYOD issue, which most likely is not only or not even about BYOD – it might be COPE/COD and not BYOD at all, for instance.
So the simple message is: Don’t start investing in BYOD until you’ve understood the bigger context. Define your strategy for that. Understand what it means for Information Security and aspects like social logins, context-based authentication and authorization, your governance approaches, and so on. If you have got that big picture, you can start picking the technical pieces that help you. And furthermore, you can start solving all the legal, organizational, and governance issues around that.
There has been a KuppingerCole webinar on that topic – have a look at the podcast.
16.10.2012 by Martin Kuppinger
At the end of last week, US Defense Secretary Leon Panetta gave his first major speech on cybersecurity, warning of a “cyber Pearl Harbor”. The speech was given during the Business Executives for National Security meeting in New York. It gained some attention in the news. The concept wasn’t entirely new, as Jon Oltsik pointed out in a post – back in 1998, Deputy Defense Secretary John Hamre cautioned the U.S. Congress about the same topics, using the term “cyber Pearl Harbor” back then as well. And in March 2012, the US Cyber Chief talked about a tide of cyber criminality. Even though I stated at the time that “tide” appears to be the wrong term (there is no ebb tide here), that speech also showed that this issue is increasingly well understood.
On the other hand, Jon Oltsik claims that “almost nothing” has been done since 1998 to actually improve cybersecurity readiness in the critical infrastructure. I disagree with that point. A lot has been done. But we didn’t manage to close the gap between the threats and cybersecurity readiness. This gap might even have become bigger. When I look at what various governments, like the U.S. or the German government, and multi-national institutions, like the EU, are doing, I see that they have started investing. They have also, like other organizations, understood that this is an immense risk. But things are moving slowly, which is no surprise when governments are involved.
The biggest issue, however, isn’t the governments but all the providers within the critical infrastructure, from utility companies to financial institutions and their technology providers. Back in 2010 I wrote a post titled “Is an insecure smart planet really smart?”. That’s where the big problem is: there are far too many initiatives around making the world “smarter” which either totally ignore security or underestimate the role security plays in being smart. This is not only true for the big initiatives, but also for industry automation and, maybe even more, for automation within households.
It is important to understand that addressing the threat Panetta described is not only a task for governments. It is a task for every single organization. With Stuxnet and Duqu, some organizations far away from the real targets became attack targets as an intermediary step. We need to rethink our security and become much better at it.
16.10.2012 by Martin Kuppinger
Yesterday there were two interesting news items about Google. A document issued by 24 of the 27 European Data Protection Councils requests that Google change its privacy policies. They claim that collecting personal data to the extent Google does poses a massive risk to the privacy of users. I agree. The Councils, however, don’t consider the policies illegal, at least not yet. That might change with the upcoming new EU data protection rules in 2014. Nevertheless, they request that Google better inform users about the use of their personal data.
I personally think that even that step might be helpful. From my perspective too many users just haven’t understood that there is no such thing as a free lunch. If you don’t pay with money, you have to pay in another currency. And the currency of Google is “privacy” or “personal data”.
It will be interesting to see what happens next. At least Google is facing some pressure from the EU now – and this most likely hasn’t been the last word in that issue.
The second issue Google is facing is about abusing its dominance of Internet search in violation of antitrust laws. An internal draft memo recommends suing Google for that. Again, it will be interesting to observe what happens here next.
Overall, it wasn’t a good day for Google yesterday.
04.10.2012 by Martin Kuppinger
Recently the “Internet Association” has been created. Their claim on the website is “We are the unified voice of the Internet economy”. They then state that they represent the interests of America’s (!) leading Internet companies and their global (!) community of users. The real message follows afterwards: “We are dedicated to advancing public policy solutions to strengthen and protect internet freedom, foster innovation and economic growth and empower users”.
This could also be read somewhat differently: we are the lobbyist organization which will try to avoid everything that could stop us from building our business on the data we have obtained about our users. We don’t want enforcement of privacy laws. In fact, the Internet Association has been created in response to DNT (“Do Not Track”) and is targeted against the FTC (Federal Trade Commission) and others who are driving this concept. A Do Not Track signal indicates that the user doesn’t want his activities tracked. Even though DNT has been the main reason for founding the Internet Association, it is just a symptom of the bigger topic behind it.
Whether this empowers their users is more than questionable. It might empower users to use some features for “free”, which in fact means paying in the currency of privacy instead of real money. But it clearly doesn’t empower users to better control their privacy on the Internet.
When looking at the list of the (as of now) fourteen members of this association, it comes as no surprise that the usual suspects like Facebook, Google, or LinkedIn are amongst them – all of them with a long history of dealing with privacy in a questionable way. On the other hand, Microsoft is not a member of that community.
It is very important to understand what this association is about. Its understanding of “internet freedom” is more that of a robber baron than that of a democracy. It is about their freedom to do business the way they want, unhindered by privacy regulations. It is not about the freedom of choice for users to decide what happens with their data. My colleague Craig Burton wrote a post some time ago in another context – but his claim therein applies here as well:
Freedom of Choice != Your Choice of Captor
The counter argument to this position is that users might want to personalize their online experience, like “tell me when I’m near something I’m interested in”. So tracking in and of itself is not necessarily bad. However, who is doing the tracking and what happens to that data is problematic.
The point is: it is still about freedom of choice. DNT enables the user to make his own decisions. Ideally, this is done in a way where the user can opt in to tracking, can opt out, and can decide who may track and who may not. Given that there is massive tracking without any information to the user today, DNT technology and regulation are mandatory from my perspective.
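On the technical side, honoring such a choice is simple. A minimal sketch follows: the “DNT: 1” request header is the real Do Not Track signal a browser sends, while the handler itself is a hypothetical illustration:

```python
# Sketch of honoring the Do Not Track signal server-side. "DNT: 1" is the
# real header sent when the user opts out of tracking; the decision
# function below is a hypothetical illustration.

def tracking_allowed(headers: dict) -> bool:
    """Track only when the user has not sent DNT: 1."""
    return headers.get("DNT") != "1"

print(tracking_allowed({"DNT": "1"}))  # False: user opted out
print(tracking_allowed({}))            # True: no preference expressed
```

The hard part was never the implementation – it is getting trackers to respect the signal, which is exactly why regulation is needed.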
Besides this, I’m not really convinced by the results of tracking. Even in the days of big data, we end up with two problems: my historic interests don’t necessarily match my current interests. And even when they match, the proposals made are frequently just weak. To give you an example: some time ago I was looking for new camera lenses on Amazon (I buy a lot via Amazon, mainly due to the simplicity and the reliable delivery). I was only looking for FX lenses for the Nikon system (full-format sensor). Right afterwards, Amazon started sending me masses of mails proposing lenses for the DX system (APS-C format sensor) and other things I definitely don’t need.
So to sum up:
- I strongly believe in the right of the individual to have freedom of choice. That clearly applies to tracking in the browser and on mobile devices.
- Tracking might be helpful – but despite all efforts and big data technology, the results provided today are at best mediocre (on average).
04.10.2012 by Martin Kuppinger
Adobe warned a few days ago that an internal server with access to its digital certificate code signing infrastructure was hacked. This resulted in at least two malicious files being distributed that were digitally signed with a valid Adobe certificate.
If you take the numbers published by Secunia, a security/patch management software vendor, Adobe ranks pretty high on the list of companies with reported vulnerabilities – especially considering that in Adobe’s case only two core products are concerned (Adobe Reader and Adobe Flash Player), compared to the broad portfolios of Oracle or Microsoft. When looking at “genuine vulnerabilities”, Adobe ranks 5th behind Oracle, Apple, Microsoft, and Google. The Secunia analysis also lists the Top 50 software portfolio, with Adobe Flash Player ranking 4th and Adobe Reader ranking 8th. Unfortunately, these are the two programs within the top ten of that list with the highest number of exploited critical vulnerabilities.
Another aspect of Adobe from the security perspective is patch management. In Adobe’s case, this is cumbersome. Furthermore, with their last patch for the Adobe Flash Player, Adobe started installing Google Chrome and the Google toolbar without user consent – at least that’s what happened on my system. I had to manually uninstall both components afterwards.
So what we see is a mix of
- a massive number of vulnerabilities
- a disputable approach on patch management
- successful attacks on a critical internal security infrastructure
Does Adobe deal with that situation like customers would expect? You might say “yes” given that expectations might be very low. However, when looking at what we should expect from a professional software vendor, there are massive shortfalls.
Did Adobe inform anyone promptly about the malicious files? No, they didn’t. The issue dates back to early July. Adobe claims that they took immediate internal action, including a clean-room reimplementation of the code signing infrastructure. Maybe they should have taken action earlier, to avoid such attacks or at least to detect them when they happen, not after malicious code appears on the Internet.
I just recently blogged about the security issue in Microsoft Internet Explorer. The Adobe approach to security management falls even more obviously into the category of “security by obfuscation”. I don’t think that this is the right way to act, especially for a software vendor who provides software that ranks amongst the top ten within the average corporate software portfolio.
Taking all these points together, it is past time for Adobe to start acting far more professionally in their security management and their patch management. Open and timely information, a simplified patch management methodology, and minimal patches without bundled additional software are the minimum requirements – together with an internal IT security approach that is good enough for today’s “advanced persistent threat” types of attacks.
02.10.2012 by Martin Kuppinger
Recently I stumbled upon a blog post with a title starting with the words “Do security like a start-up…”. That rang my inner alarm bells! When reading the post, I relaxed again. It was about the need for business and IT to work together and the recommendation to look for more generalists rather than specialists – both aspects I fully buy into, even while acknowledging that good generalists are a rare species.
But coming back to the title…
Interestingly, the post was published right around the discussion of the severe security issues of WhatsApp. WhatsApp is just another example of a start-up which failed to provide a secure implementation. And WhatsApp is just another example in a long series of start-ups which have failed greatly at security.
If “security like a start-up” were about having all people – business and IT – security-trained, that wouldn’t happen to such an extent. The problem is that start-ups typically don’t act as described in the other post, at least not when it comes to security.
Probably the better title would have been: “Start-ups should also apply the strengths of start-ups to security…” Until that happens, you had better be careful about security when evaluating start-ups or actually using their software and services. I have seen too many of them (outside of the security-related start-ups) with a horrible lack of knowledge about security, ending up providing inherently insecure software. And these days, when security has become a major concern of everyone from the end user to the enterprise, that has to change.
24.09.2012 by Martin Kuppinger
The reaction to the recent security alert for Microsoft Internet Explorer revealed an interesting phenomenon: many people believe in security by obfuscation. I alerted some people when I first saw the news concerning that security issue. Some reacted by saying: “I like my Apple iBook” or “I’ve used other browsers for a long time”. No doubt, these people are not affected by that Internet Explorer security issue. But the underlying message in these comments is about “security by obfuscation”.
Today I read another news story about iOS 6, which addresses more than 200 security issues that allowed virtually everything from bypassing the PIN-based lock to faked SMS sender IDs and code injection. One of these bugs isn’t even explained in detail by Apple. It is a bug in processing configuration files that allows attackers to claim that there is an important system update which appears to be correctly signed. However, it isn’t, and thus the bug allows the installation of malware. Without going into detail: iOS has never been really secure, and the way Apple deals with security issues and patches the system isn’t leading-edge.
Other browsers also have their weaknesses. And most vendors don’t provide security information in the same (relatively) open way Microsoft does, nor do they have a well-defined patch process.
It is a matter of fact that operating systems and browsers have security weaknesses, and new ones are identified regularly. It is also a matter of fact that the interest of attackers in operating systems and browsers increases with their market share. So there is little reason to assume that you are far more secure when using Apple devices, operating systems, or the Safari browser. There is some reason to believe you are at less risk when using one of the more obscure browsers. But security by obfuscation doesn’t really make sense.
So instead of sitting back with a sardonic grin, it is better to try to understand the risks in your own environment. The good thing about Microsoft environments is that at least many users are more aware of the risks and don’t try to fool themselves…
Besides the point that everyone needs to understand that there is no secure software and thus all environments are at risk (and the more success a platform has in the market, the higher the risk), there is another important point to look at: how do vendors deal with security issues? Do they inform openly? Do they have a quick, reliable, and simple approach to applying security patches? Instead of blaming vendors, it should be understood that every security bulletin and press release about new security issues is positive in the sense that it proves there is a working process for identifying and patching security issues. That is much better than leaving massive security leaks open until the next operating system update. Not having such a process is also part of obfuscation – and it doesn’t help the customer, because potential attackers are communicating about known issues anyway.
For those who want to see some numbers: just download the annual report from www.secunia.com. It is not only, and not even mainly, about Microsoft anymore. Security risks on Microsoft platforms and in Microsoft software affect a lot of users. But if you look at the numbers for Apple, Adobe, or the leading Linux distributions, you should be scared. And in contrast to Microsoft, many of them neither inform well about security issues nor have an efficient patch management process in place. So you should neither try to get more secure by obfuscation nor obfuscate your view on the reality of security.