How to identify attacks? Know your enemies – and what they might already be doing.

26.05.2014 by Martin Kuppinger

In a panel discussion I had at EIC 2014 with Roy Adar, Vice President of Product Management at CyberArk, Roy brought up an interesting number: according to research, attacks start on average 200 days before they are detected. Taking into account the Gaussian distribution behind this average, some attackers might have been active for years before being detected. And who knows whether all of them are ever detected.

How to react to this? There are several elements to the answer. Protect your systems with various layers of security. Use anti-malware tools, even though they won’t catch every piece of malware or every attacker. Encrypt your sensitive information. Educate your employees. These and other “standard” actions are quite common. But there is at least one other thing you should do: analyze the behavior of users in your network.

I do not mean user tracking in the sense of “do they do their job” (which is hard to implement in countries with strong works councils); I’m talking about identifying anomalies in their behavior. Attackers are characterized by uncommon behavior. Users might access far more documents than average, or than they did before. Accounts might be used at unusual times. Users might log in from suspicious locations. Sometimes it is not a single incident but a combination of things, possibly over a longer period of time, that is typical for a specific form of attack – especially in the case of long-running APTs (Advanced Persistent Threats).

There is an increasing number of technologies available to analyze such patterns. Standard SIEM (Security Information and Event Management) tools are one approach; however, anomalies are difficult to detect with purely rule-based analysis. An increasing number of solutions therefore rely on more advanced pattern-matching technologies. These can, based on specific mathematical algorithms, turn log events and other information into patterns (in fact, complex matrices) and analyze these for anomalies. There might be some noise in the results in the form of false positives, but this is true for rule-based analytics as well. Combining such analytical technologies can make a lot of sense – if you bring together specialized analytics for areas such as Privilege Management (for instance, CyberArk’s PTA), User Activity Monitoring, pattern-based analytics, and traditional SIEM, you might learn a lot about these anomalies and hence about the attacks that are already running and the attackers behind them.
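
To make the idea more concrete, here is a minimal sketch of pattern-based anomaly detection, reduced to a single feature: the number of documents a user accesses per day. All names, data, and thresholds are hypothetical, and real products use far more sophisticated algorithms across many more features – but the principle of scoring current behavior against a per-user baseline is the same.

  # Minimal sketch: score today's user activity against a per-user baseline.
  # All data, names, and thresholds are hypothetical illustrations.
  from statistics import mean, stdev

  # Historical activity: documents accessed per day, per user
  history = {
      "alice": [12, 9, 14, 11, 10, 13, 12],
      "bob": [3, 4, 2, 5, 3, 4, 3],
  }

  def anomaly_score(user, todays_count):
      """Z-score of today's activity against the user's own history."""
      mu, sigma = mean(history[user]), stdev(history[user])
      if sigma == 0:
          return 0.0
      return (todays_count - mu) / sigma

  # Today, bob suddenly accesses roughly ten times his usual volume.
  for user, count in [("alice", 13), ("bob", 38)]:
      score = anomaly_score(user, count)
      print(f"{user}: {count} documents, z={score:.1f}",
            "-> ANOMALY" if abs(score) > 3 else "-> ok")

The same approach extends to login times, source locations, and combinations thereof; accumulating several weak signals over time is what makes long-running APTs visible.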

From our perspective, all this is converging into a new discipline we call Real-time Security Intelligence (RSI). There is a new report out on that topic. I also recently wrote another post on RSI.

Even if you feel it is too early to move towards RSI, you should focus on learning more about the attackers that are already inside your network. Understanding anomalies and patterns with new types of analytical technologies might help.


Real-time Security Intelligence – more than just “next generation SIEM”

14.03.2014 by Martin Kuppinger

Recently, a spotlight has been shed on the need for investing in Information Security solutions. The increase in cyber-attacks, the consistently high level of internal challenges, the appearance of more sophisticated types of long-running attacks (sometimes called Advanced Persistent Threats or APTs), the concerns regarding cyber-security following the Snowden revelations, the permanent challenge of dealing with zero-day attacks, which leave no time between disclosure and exploitation: all this has led to an understanding of the need for better solutions.

Organizations have to assume that the attacker is already in their network. Every organization and every user is a potential target for attackers. At the same time, with the increasing sophistication of attacks, it is becoming more difficult to identify the attackers. Finally, there is no longer a single perimeter where organizations can place their security systems to prevent external attackers from entering the network. Attackers might already have found their way in via mobile devices, they might attack cloud services, and so on. Complexity is increasing.

We see a new category of solutions evolving in the market that promises to help customers better address these challenges. First, though, let’s look at why current solutions are not sufficient.

Standard IDS/IPS (Intrusion Detection/Prevention Systems), conceived as edge devices, are obviously limited when there is no well-defined perimeter. They are also limited when it comes to complex attack scenarios involving a number of systems.

SIEM (Security Information and Event Management) is still, typically, a tool-driven approach that requires heavy customization. Unless you are able to configure these systems correctly, they will not deliver on your expectations, for example in the setup of an SOC (Security Operations Centre). And when more and more real-time information has to be taken into account for analysis, they might show limitations regarding their scalability.

Next Generation Firewalls, again, are edge devices, suffering from the conceptual limitations of such devices.

Services providing real-time security information – regarding newly detected zero-day attacks, for instance – deliver valuable input, but they don’t fix the problem. Furthermore, they do not analyze what is happening in the internal infrastructure.

Recently, though, we have observed a growing number of vendors moving towards integrated methods for Real-time Security Intelligence, combining various technologies and services:

  • Big Data analytics, enabling the analysis of large amounts of data, based on both rules and patterns;
  • Support for both real-time analytics and historical analysis, which can facilitate identifying new events as being related to those that occurred sometime in the past (see the sketch after this list);
  • Integration with existing sources of information, including SIEM tools;
  • Integration with real-time security information services that provide up-to-date information about newly detected security challenges;
  • Services that provide automatically updated rules and patterns for analytics, i.e. configurations that reduce the need for customers to manually keep the Real-time Security Intelligence systems up to date;
  • Services that support customers with analytics, i.e. expert services supporting the customer’s SOC;
  • Integration with IT GRC solutions, translating the identified challenges into risk information visible in dashboards for IT and business people.
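
To illustrate the second point in the list above – relating a new real-time event to historical ones – here is a minimal sketch. The event format, the shared indicator, and the 90-day window are hypothetical choices; real implementations would run such correlations over Big Data stores.

  # Minimal sketch: relate a new real-time event to historical events
  # sharing the same indicator. All data and field names are hypothetical.
  from datetime import datetime, timedelta

  historical_events = [
      {"host": "srv-07", "indicator": "beacon-to-203.0.113.5",
       "time": datetime(2014, 1, 3, 2, 14)},
      {"host": "srv-12", "indicator": "beacon-to-203.0.113.5",
       "time": datetime(2014, 2, 19, 3, 2)},
  ]

  def related_history(new_event, window_days=90):
      """Find past events sharing an indicator with the new event."""
      cutoff = new_event["time"] - timedelta(days=window_days)
      return [e for e in historical_events
              if e["indicator"] == new_event["indicator"]
              and e["time"] >= cutoff]

  incoming = {"host": "srv-21", "indicator": "beacon-to-203.0.113.5",
              "time": datetime(2014, 3, 10, 2, 40)}
  matches = related_history(incoming)
  print(f"{len(matches)} related historical events",
        "-> possible long-running attack" if matches else "")

Seeing the same indicator recur across hosts and months is exactly the kind of connection that separates a long-running APT from an isolated incident.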

Real-time Security Intelligence will become a mix of services and software. It will combine various offerings that exist today but are separate from each other. It will give customers better insight into what is already happening in their networks right now. Some vendors even provide the capability of changing network configurations based on their analytical services.

We expect to see rapid evolution in this area, with further services being added. There is strong potential in integrating network configuration management systems with Real-time Security Intelligence, allowing firewall settings, for example, to be changed on the fly. Another example is integration with SDCI (Software Defined Computing Infrastructures) to adapt the configuration of networks, storage, and virtual machines when new security challenges are identified, automatically and dynamically minimizing the attack surface.
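
As an illustration of such automated response, here is a conceptual sketch of an analytics engine pushing a blocking rule to a firewall when a high-confidence finding comes in. The FirewallClient class and its API are entirely hypothetical stand-ins; a real deployment would use the firewall vendor’s management interface or an SDN controller.

  # Conceptual sketch: automated firewall reconfiguration on a finding.
  # FirewallClient is a hypothetical stand-in for a real management API.
  class FirewallClient:
      def __init__(self):
          self.rules = []

      def block(self, src_ip, reason):
          rule = {"action": "deny", "src": src_ip, "reason": reason}
          self.rules.append(rule)
          print(f"pushed rule: {rule}")

  def on_finding(finding, firewall, min_confidence=0.9):
      # Act automatically only on high-confidence findings; everything
      # else should go to a human analyst in the SOC.
      if finding["confidence"] >= min_confidence:
          firewall.block(finding["src_ip"], finding["description"])

  fw = FirewallClient()
  on_finding({"src_ip": "198.51.100.23", "confidence": 0.95,
              "description": "beaconing to known C2 host"}, fw)

The key design question in practice is where to set the automation threshold: changing network configurations on the fly reduces reaction time, but a false positive now blocks legitimate traffic.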

In the evolution towards Real-time Security Intelligence that we observe right now, some vendors focus more on Big Data security analytics while others put more emphasis on online services – but this is just scratching the surface. There will be fundamental changes in the way we do security and run SOCs, going well beyond just being “Next Generation SIEM”.

Learn more about Real-time Security Intelligence and how to successfully deal with your cyber-security challenges at the upcoming EIC 2014. And don’t miss our upcoming webinar on “Mitigate targeted attacks with privileged account analytics” – not primarily about Real-time Security Analytics, but about one approach to mitigating the risks of becoming a victim of targeted attacks.


Why Apple’s culture of secrecy is your biggest risk in BYOD

27.02.2014 by Martin Kuppinger

The news of the bug in Apple operating systems has spread this week. As Seth Rosenblatt wrote on CNET, Apple’s culture of secrecy has once again delayed a security response. While there is a patch available for iOS, users of OS X still have to wait.

I have written before about the risks Apple’s culture of secrecy imposes on users. There are two major issues:

  • Apple informs users neither adequately nor in a timely manner about security issues. Doing so is mandatory, including providing detailed information about workarounds and patches.
  • Apple still does not have an adequate patch policy in place.

It is well worth reading Rosenblatt’s article, as it provides a number of examples of the consequences Apple’s culture of secrecy has from a security perspective. I can wholeheartedly agree with his final paragraph:

“With its history of lengthy response times to critical security problems, Apple is equally long overdue for a serious re-evaluation of how they handle their insecurities.”

However, the culture of secrecy is just a consequence of Apple’s “we are the best and don’t make errors” hubris – a long tradition at Apple. They positioned themselves as the counterpoint to the error-prone Microsoft Windows products a long time ago. But while Microsoft has learned its lessons in software quality, security response, and patch management, Apple has not. Apple has to learn that continuous improvement and a good approach to security response and patching are required of any vendor – even Apple.

This attitude also impacts the risk evaluation of BYOD strategies. If you can’t trust the vendor, you have to protect yourself. So what can you do, if you do not want to simply ban Apple devices until Apple provides an enterprise-class approach to security response and patching?

The simple yet expensive answer is: invest in additional BYOD security measures. There are various options out there, none of them being the “holy grail” of mobile security. However, if you combine information- and identity-centric approaches to security with mobile security, you should be able to better understand and mitigate your risks. Unfortunately, doing so means spending even more money to secure expensive hardware without added value. That’s a high price to pay for allowing users to use Apple devices.

There will also be a price to pay in terms of restricted use. This might mean limiting access from insecure apps (and there are some that are affected by the current bug) or imposing temporary access restrictions when new bugs are detected, until these are fixed. There might be a need to rely on other, more secure apps, for instance for accessing e-mail, instead of the built-in ones. As always: there is a price to pay. If you don’t want to carry the risk Apple puts on you with its inadequate security policy, you have to invest in security, and you will have to restrict the use of these devices, impacting users’ convenience.

Unless Apple changes its security culture and its overall attitude of “we are the best and don’t make errors”, the advice must be: don’t trust any organization that relies on a culture of secrecy. And take care of security yourself.



KuppingerCole Predictions and Recommendations 2014

19.12.2013 by Martin Kuppinger

On Monday this week, we published the KuppingerCole Predictions and Recommendations for 2014. They differ from other publications peering into the crystal ball in one important aspect: we provide not only predictions but also recommendations. More on that below.

Information Security is in constant flux. With the changing threat landscape, as well as a steady stream of innovations, demand for Information Security solutions is both growing and re-focusing. Based on new offerings and changing demand, KuppingerCole predicts several major changes in the Information Security market. Specifically, we have identified the following areas where we see massive change in 2014:

  • Software Defined Networking (SDN) – Software Defined Computing Infrastructures (SDCI)
  • Integrated Real-time Network Security Analytics
  • Cloud IAM (Identity and Access Management)
  • Digital, Smart Manufacturing & Smart Infrastructure: ICS & SCADA
  • API Economy
  • IoEE (Internet of Everything and Everyone)
  • BYOI (Bring Your Own Identity) and Biometric Authentication
  • Big Data
  • Cloud Service Provider Selection and Assurance
  • Ubiquitous Encryption

The document provides both predictions and recommendations. The latter focus on how organizations should react to the changes we predict. It is not always best to jump on every trend and hype – in many cases, it is better to define strategies first and make organizational changes before starting to implement new types of technology. Do not rely on predictions alone.

Have a look at our document. The best place to learn more about these topics is the upcoming European Identity and Cloud Conference (EIC) 2014, Munich, May 13th to 16th. And don’t miss all our current and upcoming research on these topics.



Smarter Security Spending

30.04.2013 by Martin Kuppinger

On Thursday, I was moderating a panel discussion at infosecurity Europe (InfoSec), the leading UK security fair, which hosts a program of keynotes and panel discussions. My panel was titled “Smarter security spending: Optimising spend without exposing the business”. Panelists were Dragan Pendić, Chief Security Architect, Global Information Management and Security, at Diageo; Michelle Tolmay, Security Officer, ASOS; Cal Judge, Information Security Head, Oxfam; and Graham McKay, CISO, DC Thomson.

We had a lively, well-attended session with some interesting questions during the Q&A that followed the panel discussion. The key takeaways for smarter security spending that we arrived at during the discussion were:

  • People
  • Common Language
  • Risk
  • Big Picture

Getting users on board was one of the most important themes of the discussion. Without increasing people’s involvement in and understanding of Information Security, it is hard to get the buy-in and support you need, from both management and end users. This is an important element of what KuppingerCole calls Information Stewardship.

Involving people is tightly related to the need for a common language – talking in business terms instead of tech talk. Information Security is about the I in IT, not primarily the T: business is interested in protecting information, not technology. The latter is just a means to protect information.

For that common language, the concept of “risk” is of central importance. Business thinks in risks. Managers are used to basing their decisions on risk; mitigating and taking risks is part of their daily job. Risk also helps in moving IT from the role of the notorious naysayer to that of a business enabler. If business requests a service, instead of pointing at all the technical challenges and no-gos, it is better to show some options, their benefits, their cost, and the associated risks. That enables the business to make informed decisions.

Risk, on the other hand, is the foundation for smart spending when investing in Information Technology – the T in IT. Understanding the risk mitigation impact of such technology and its benefit for the business helps in making better decisions. It helps in moving from point solutions and decisions made in “panic mode” after an incident towards structured, well-thought-out decisions based on the best risk/reward ratio (RRR). This always includes understanding the big picture: how do new solutions fit into it? Smart spending requires a smart balance between defining and understanding the big, strategic picture and taking tactical steps towards it that provide the best RRR.

To learn more about that, join us at EIC 2013 – the European Identity and Cloud Conference, Munich, May 14th-17th. Starting with my opening keynote, the topics discussed in that Infosec panel will play an important role throughout the entire conference.



The value of information – the reason for information security

18.07.2012 by Martin Kuppinger

If you’ve ever struggled to find the argument for an investment in information security, here it is: according to a survey recently published by Symantec, 40% of the worth of organizations is derived from the information they own. The link goes to a German site and the Germany-specific extract of that survey, but the report itself is in English; the global version can be found here. There are other interesting numbers: 57% of the German respondents expect a loss of customers and 48% expect brand damage in case of a leak of information (and breach notification). The global numbers aren’t that different: on a global basis, information is estimated to be 49% of an organization’s total value, while 49% of respondents expect loss of customers and 47% brand damage in a data leak event.

These numbers help in arguing the case with business managers. They also prove what we’ve been observing over the past few years: Information Security is a hot topic again. Business cares about information security (and notably not about “technology security” – it’s about the I in IT, not the T). And thus, business needs information security. One of the reasons is simply that, some years ago, a leak of sensitive or valuable data was mentioned on page 7 or so of a computer magazine; nowadays it might make the opening headline of the daily news on TV, or of the business newspapers (Wall Street Journal, Financial Times, etc.).

Numbers like the ones from the Symantec report help in showing the value of Information Security investments: first by showing that it is about information security, and then by showing the potential impact of leaks and breaches on the business. The numbers also clearly indicate that this “IT risk” of leaking information is about business risks: operational risks, reputational risks, and even strategic risks – if you lose too many customers, damage the brand too much, or your competitor gains access to your most valuable intellectual property.

There is a good reason that information security is one of the two key drivers of what we at KuppingerCole have worked out as the KuppingerCole IT paradigm, our approach to structuring IT to deal with fundamental changes like Cloud Computing, Social Computing, and Mobile Computing, and to deliver what business really wants:

  • Business wants the (IT) services it really needs, when it needs them – and it wants to order business services, not technology services that it then waits endlessly for IT to deliver
  • Business wants its information secured appropriately – this is where information security comes into play and why, over the past few years, it has become a real concern of business managers

There is a comprehensive report available on this KuppingerCole IT paradigm, along with some additional KuppingerCole Scenario reports like “The Future of IT Organizations” that dive deeper into the details.


Saying that others are wrong doesn’t make a mobile OS secure

30.11.2011 by Martin Kuppinger

Recently, Chris DiBona published a comment (or blog post or whatever it is) on Google+ bashing a lot of companies and people in the industry. He starts with “people claiming that open source is inherently insecure and that android is festooned with viruses because of that and because we do not exert apple like controls over the app market.” Further down he claims that no major cell phone has a virus problem the way Windows or Mac machines do. There are some other harsh statements in the article, especially about vendors in the security space being charlatans and scammers.

Not surprisingly, there has been a flood of press releases and other responses from vendors of anti-virus, anti-malware, and other types of security tools.

If you look at the facts, then in my opinion some things are evident:

  • Every type of software is potentially insecure – that includes closed source and open source
  • There are better and worse approaches to deal with security flaws – and that doesn’t relate to software being open source or not
  • There is malware attacking Android devices and the number of known issues is growing
  • There are different approaches to marketplaces like the ones for Android and iOS – however, even open marketplaces could use independent testing and certification approaches to increase security
  • Yes, vendors are trying to earn money with security solutions for mobile devices, and there is marketing involved

However, the essential point is: there are security risks, and instead of bashing others, the goal should be to mitigate those risks. That needs to be done before the security issues become too big. Saying that “If you read a report from a vendor that trys to sell you something based on protecting android, rim or ios from viruses they are also likely as not to be scammers and charlatans.”, to quote Chris DiBona again, is absolutely misleading. The problem might not be as big as some marketeers claim today – but there is a malware problem and there is a need to deal with it. That is not to say that anti-malware on mobile devices is the best choice to solve the problem… And yes, Chris DiBona is correct in saying that these usually aren’t viruses but other types of malware – but that’s splitting hairs! So, instead of playing things down, it is about understanding current and upcoming risks and security needs, and then acting on that – regardless of whether you provide open source or closed source.

I personally believe that it’s worse to play down security issues than to try to identify and address them. And if someone uses the wrong term (like “virus” for something that isn’t a virus), OK – that happens, and “virus” is a term that is commonly misused. But it doesn’t change the fundamental facts: there are security risks for mobile devices, and thus users have to react. Oh, and by the way: I thought we had ended these religious “open source or not” discussions at least five or ten years ago. There is no value in these discussions. There is only value in providing better software.

And when talking about Android and the way it uses information, I can only state that it is not the best example of “fair information practice” (to put it carefully). Information security is not only about malware and the like; it is about the way systems deal with information overall. With respect to the way Android deals with GPS locations, SSIDs of available WLANs, and other information, just have a look here (to give you just one example; there is more to be found on YouTube). So again, Google: do your homework first before you start bashing others.



SAML, SCIM – and what about authorization?

16.11.2011 by Martin Kuppinger

Cloud Computing is just another delivery model for IT services. However, due to the specifics of cloud services, like multi-tenancy and many others, the requirements are sometimes even higher than for on-premise services. One of these requirements, in well-architected IT environments and for well-architected applications, is the ability to externalize security. That includes relying on external directories, e.g. Identity Providers, for administering and authenticating users. It might include the capability of “cloud provisioning”, i.e. receiving changes to users – even though I clearly favor federation, as a loosely coupled approach, over provisioning. And it should include support for external logs, event monitoring, and so on – unfortunately, that appears to be a topic no one is really working on.

It should also include the capability of managing authorizations in cloud services based on centrally managed policies (on-premise or using a cloud service – but centrally, and not per cloud service!). There is limited value in federating users and then doing all the administration work per cloud service, using each cloud service’s proprietary management GUIs or APIs. However, authorization is where the problem really starts.

There is a standard for distributed, dynamic authorization management out there: XACML, the eXtensible Access Control Markup Language. It allows the rules to be described. It allows working with different repositories for identity information (PIPs, Policy Information Points) and other information required for authorization decisions, it provides interfaces to custom and standard applications, and so on. However, so far I haven’t seen XACML in the cloud. Unfortunately, I also haven’t seen any real alternative to XACML.

Some might claim that SAML could do that job. There is the SAML Authorization Decision Query as part of the SAML 2.0 standard. But that leads pretty quickly to SAML/XACML interoperability and things like the SAML 2.0 profile of XACML. In fact, if it is about having a consistent set of policies expressed in a common standard, XACML is what we need. We need to define and manage these policies consistently per organization, not per service. Services should request authorization decisions – at least in an ideal world. When looking at the cloud, however, another aspect comes into play: performance. Performance is a general issue when externalizing authorization decisions. For cloud services that have to ask many different authorization “engines”, it is an even bigger issue. And there is the issue of latency, which is a factor in cloud environments due to the geographical distances involved.

Thus, while XACML is fine for defining policies, the interesting question is: should cloud services ask an external authorization engine for each authorization decision? Or is it better to replicate the relevant XACML policies to the cloud service and make the authorization decisions there? Even then, we would still need a way to efficiently access the PIPs for the other attributes required to make an authorization decision.
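
To make the trade-off tangible, here is a conceptual sketch – not real XACML: policies and attributes are reduced to simple Python structures, and the network round trip to a central PDP (Policy Decision Point) is simulated with a sleep. The point is only to contrast per-decision remote calls with local evaluation of replicated policies.

  # Conceptual sketch of the trade-off: remote PDP call per decision
  # versus local evaluation of replicated policies. Not real XACML -
  # policies and attributes are simplified to Python structures.
  import time

  POLICIES = [  # centrally managed; here, replicated to the service
      {"role": "hr-manager", "action": "read", "resource": "payroll"},
  ]

  def local_decide(subject, action, resource):
      """Option 2: evaluate replicated policies at the cloud service."""
      return any(p["role"] in subject["roles"]
                 and p["action"] == action
                 and p["resource"] == resource
                 for p in POLICIES)

  def remote_pdp_decide(subject, action, resource):
      """Option 1: ask the central PDP - correct, but adds latency."""
      time.sleep(0.05)  # stand-in for the network round trip
      return local_decide(subject, action, resource)

  alice = {"roles": ["hr-manager"]}
  print(remote_pdp_decide(alice, "read", "payroll"))  # slow per decision
  print(local_decide(alice, "read", "payroll"))       # fast, needs sync

Local evaluation wins on latency, but it turns policy distribution and PIP attribute access into synchronization problems – which is exactly the kind of enhancement the standard would need.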

I don’t have the full answer. However, I’m convinced that XACML is a key element for authorization in the cloud, given that it is the standard for externalizing authorization decisions. But it might need some enhancements to work optimally for cloud security as well. It will definitely need improved security architectures for cloud services themselves, so that they externalize authorization decisions and rely on centrally managed policies. And it definitely needs some thinking about the overall security architecture of cloud services. So I’m looking forward to comments on this post – maybe I’ve missed something and everything is already there; maybe this initiates some enhancements to standards. I don’t know, but I’m really curious.


How to deal with Data Sprawl? Could a sticky policy standard help?

21.07.2011 by Martin Kuppinger

Data Sprawl appears to me to be one of the biggest challenges in information security. And, by the way, Data Sprawl is not an issue that is specific to Cloud Computing. It is a problem organizations are facing day by day.

What happens when data is extracted from an SAP system? One example: a CSV (flat) file is created with some data from the HR system. This file is delivered to another system, in the best case using some form of secure file transfer. But what happens then? That other system processes the file in some way or another. It might export some or all of the data, which then ends up in yet another system. And so on…

The point is: once data leaves a system, it is out of control.

The problem is that this might happen not with just one CSV file but with hundreds of them – and with dozens of systems exporting and importing that data. Governance is difficult to implement. You can define a process for allowing exports. You might even define rules for the use of exported data. You might review the exports regularly – are they still needed? However, reviewing what happens with the data at the target systems (are the rules enforced?) is pretty complex, and up to now there is no technical solution to that problem.

Things become even worse with Data Warehouses and Business Analytics. Data frequently ends up in large data stores and is analyzed. That means data is combined, sometimes exported again, and so on. How do you keep control? Implementing Access and Data Governance for Business Analytics systems is a big challenge, and auditors frequently identify severe risks in that area – which is no surprise at all.

Another scenario is PII on the Internet. If we give some PII to some provider for some reason, how can we ensure that the provider doesn’t give that PII away? There is no way, I’d say. We might use special e-mail addresses or fake information to trace abuse of PII back to its source, but that’s not really a solution.

So what to do? Short term, it is about implementing processes which at least try to minimize Data Sprawl and the associated risk, as mentioned above. These processes and policies are far from perfect. That helps internally, but not for PII.

We might use (very) long-term technical solutions like homomorphic encryption and other technologies being developed around “minimal disclosure” approaches to address some of the issues. We might then use an approach like Information Rights Management that works not on a document basis but on an attribute basis. But most of these things will help us sometime in the future, if ever.

But what about defining a policy standard that is sticky to the data – a standard that describes how the data may be used? If systems support this standard, they can enforce it. The idea would be to allow exports, at least of sensitive data, only to systems that support the standard and enforce the policies. If data is split up, the policy has to stick to all parts (as long as it applies to all of them). If data is combined, the policies have to be combined – the intersection of the policies then applies.
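
A minimal sketch of how such sticky policies might behave in code – the policy vocabulary (“analytics”, “export”) and the class design are my own hypothetical illustration, not a proposed standard:

  # Sketch: a policy that travels with the data. Splitting inherits the
  # full policy; combining yields the intersection of allowed uses.
  class StickyData:
      def __init__(self, payload, allowed_uses):
          self.payload = payload
          self.allowed_uses = set(allowed_uses)

      def split(self, selector):
          """Any extract inherits the policy of its source."""
          return StickyData(selector(self.payload), self.allowed_uses)

      def combine(self, other, merge):
          """Combined data may only be used as BOTH policies allow."""
          return StickyData(merge(self.payload, other.payload),
                            self.allowed_uses & other.allowed_uses)

  hr = StickyData({"salary": 50000}, {"analytics"})
  crm = StickyData({"customer": "ACME"}, {"analytics", "export"})
  joined = hr.combine(crm, lambda a, b: {**a, **b})
  print(joined.allowed_uses)  # {'analytics'} - export no longer allowed

Of course, such a mechanism only works if every receiving system enforces it – which is precisely why it would need to be a standard.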

Such an approach has limitations, because it first of all needs people to define the standard. And, as with all standards, it is about reaching critical mass. On the other hand, virtually every organization has the problem of Data Sprawl and lacks a valid answer to the questions asked in the context of Data and Access Governance. Thus, there is a real need for such a standard. From my perspective, the large vendors in the markets of Business Applications (e.g. ERP, CRM, and related systems), of Business Analytics, and of ETL and EAI applications are the ones who should work on such a standard, because they are the ones who would have to support it in their systems. And they should start quickly, because their customers are increasingly under pressure from the auditors.


Be prepared for BYOD

06.06.2011 by Martin Kuppinger

BYOD: again one of these acronyms. It stands for “Bring Your Own Device”. You could also say it stands for IT departments accepting that they have lost against their users – they have lost the discussion about which devices shall be allowed in corporate environments. When I travel by train, I observe an impressive number of different devices being used: Windows notebooks, netbooks, iPads, iBooks, other types of “pads”, smartphones, …

For a long time, corporate IT departments have tried to limit the number of devices to a small list, so as to be able to manage and secure them. However, reality, especially in the world of mobile devices, proves that most IT departments have failed. For sure, many have restricted access to corporate e-mail to BlackBerry devices. But many haven’t managed to achieve that target. And the popularity of Apple devices increases the heterogeneity of devices being used by employees.

It increasingly looks like the only solution is acceptance. Accept that users want to use different types of devices. Accept that innovation, especially around smartphones and pads, moves far more quickly than corporate IT departments can adapt their management tools.

At first glance, that sounds like a nightmare for corporate IT departments. How to manage these devices? How to secure them? However, it is not about managing or securing the devices. That would be “technology security”. It is about managing and securing information, i.e. “information security”. It’s about the I in IT, not the T. Thus, we have to look at when to allow access to which information using which tool.

To do this, a simple matrix might be the starting point. The first column contains the classes of devices – notably, not every single device. The first row contains the applications and information being used. In the cells you define the requirements, based on the risk score of both the devices and the information. In some cases you might allow access via secure browser connections; in others you might require virtual desktop connections; in still others you might end up having to build a specialized app. However, if banks are able to secure online banking on smartphones, why shouldn’t you be able to secure your corporate information on these devices?
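
Such a matrix can be expressed very simply. The device classes, information classes, and required controls below are hypothetical examples, with deny-by-default for any combination that was never explicitly allowed:

  # Sketch of the device-class / information-class matrix.
  # All classes and required controls are hypothetical examples.
  MATRIX = {
      # (device class, information class) -> required access path
      ("managed-notebook", "internal"):     "direct",
      ("managed-notebook", "confidential"): "direct",
      ("byod-smartphone",  "internal"):     "secure-browser",
      ("byod-smartphone",  "confidential"): "virtual-desktop",
      ("byod-tablet",      "confidential"): "dedicated-app",
  }

  def required_control(device_class, info_class):
      """Look up the minimum control; deny by default."""
      return MATRIX.get((device_class, info_class), "deny")

  print(required_control("byod-smartphone", "confidential"))  # virtual-desktop
  print(required_control("byod-tablet", "internal"))          # deny

The deny-by-default lookup is the important design choice: a new device class gets no access until someone has consciously assessed its risk.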

You might argue that building apps or deploying desktop virtualization is quite expensive. However, trying to manage all these different devices, or trying to restrict the devices allowed, is expensive as well – and much more likely to fail. I’m not saying that it is easy to protect your corporate information in a heterogeneous environment supporting BYOD. But it is much more feasible than managing and securing every single device – given the increasing number of these devices, the speed of innovation, and the simple fact that corporations don’t own all of them.

Thus it is about preparing for BYOD by providing a set of secure paths to access corporate information and to protect that information – and by understanding how to protect which information where. When you start with BYOD, do it risk-based.

