From preventive to detective and corrective IAM

08.08.2014 by Martin Kuppinger

Controls in security and GRC (Governance, Risk Management, and Compliance) systems are commonly structured into preventive, detective, and corrective controls. When we look at IAM/IAG (Identity and Access Management/Governance), we can observe a journey from the initial focus on preventive controls towards increasingly advanced detective and corrective controls.

IAM started with a preventive focus: managing users and their access controls in target systems. Setting these entitlements correctly prevents users from performing activities they should not perform. Unfortunately, this rarely works perfectly. A common example is access entitlements that are granted but never revoked.

With the introduction of Access Governance capabilities, some forms of detective controls were introduced. Access recertification focuses on detecting incorrect entitlements. The initial “access warehouse” concept as well as various reports also provided insight into these details. Today’s more advanced Access Intelligence and Access Risk Management solutions also focus on detecting issues.

Some vendors have already added integration with User Activity Monitoring (e.g. CA Technologies), SIEM (e.g. NetIQ), or Threat Detection Systems (e.g. IBM, CyberArk). These integrations move detection from a deferred, after-the-fact exercise towards near-time or real-time detection. If unusual activity is detected, alerts can be raised.

The next logical step will be corrective IAM – an IAM that automatically reacts by changing the settings of preventive controls. Once unusual activity is detected, actions are triggered automatically. The challenge therein is obvious: how to avoid interrupting the business in the case of “false positives”? And how to react adequately to genuine incidents without over-reacting?

In fact, corrective IAM will require moving action plans that today sit in drawers (best case) or only in the minds of some experts (worst case) into defined actions, configured in IAM systems.
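
To make this more concrete, here is a minimal sketch – plain Python with hypothetical function and field names, not any specific product’s API – of what such a defined, graduated corrective action could look like. The point is to balance response strength against detection confidence, so that likely false positives never interrupt the business:

```python
# Minimal sketch of a graduated corrective-control policy (hypothetical names).
# Low-confidence alerts are only flagged; only high-risk, high-confidence
# anomalies trigger the strongest automated response.

from dataclasses import dataclass

@dataclass
class Anomaly:
    user_id: str
    resource: str
    risk_score: float   # 0.0 .. 1.0, from the detection layer
    confidence: float   # 0.0 .. 1.0, how sure the detection is

def corrective_action(anomaly: Anomaly) -> str:
    """Return the corrective action to trigger in the IAM system."""
    if anomaly.confidence < 0.5:
        # Probable false positive: flag for review, do not interrupt business
        return "flag_for_review"
    if anomaly.risk_score >= 0.8:
        # High risk, high confidence: revoke sessions, suspend the entitlement
        return "revoke_sessions_and_suspend_entitlement"
    if anomaly.risk_score >= 0.5:
        # Medium risk: force re-authentication with a stronger factor
        return "require_step_up_authentication"
    return "notify_owner_and_log"

# Example: a medium-risk anomaly detected with high confidence
print(corrective_action(Anomaly("jdoe", "finance-db", risk_score=0.6, confidence=0.9)))
# -> require_step_up_authentication
```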

However, with the worsening threat landscape, with the knowledge that the attacker might already be inside the system, and with IAM covering not only employees and internal systems but also business partners, customers, and the Cloud, IAM has to become far more responsive. IAM not only needs to become “real-time detective”, it also needs corrective controls put in place. This will be the next logical step in the evolution of IAM, which started way back with preventive controls.


Can EU customers rely on US Cloud Providers?

05.08.2014 by Martin Kuppinger

The recent US court decision has added to the concerns of EU customers (and of customers in other regions such as APAC) regarding the use of Cloud services from US-based providers. The decision orders Microsoft to turn over a customer’s emails stored in Ireland to the US government, requiring the company to hand over any data it controls, regardless of where it is stored.

While the judge has temporarily suspended the order from taking effect to allow Microsoft time to appeal to the 2nd US Circuit Court of Appeals, it remains hanging, like the sword of Damocles, over the US Cloud Service Providers (CSPs).

The decision further increases the uncertainty many customers feel regarding the Cloud and is the latest in a series of setbacks since the Snowden revelations. So let’s look at the facts behind the FUD (fear, uncertainty, doubt).

In fact, the most important issue of the Cloud is control, not location. Many of the current regulations have been criticized for focusing on location instead of control. When appropriate security controls are in place, why should it make a difference whether data is stored in an EU datacenter or in a US datacenter? The location argument is somewhat invalid anyhow, given that data might be routed through other locations, based on how the IP protocol stack works. This caused the recent discussion about an EU Cloud.

However, if control is the better concept in these days of the Internet and the Cloud, the court decision has some logic. The alternative – making it about location, not about control – would in fact mean that a US criminal could hide data simply by storing it in the Cloud outside the US.

Notably, the recent US court decision (still subject to appeal) does not provide blanket access to data. In this case, it appears that the data is related to criminal activity. It is common in virtually all jurisdictions that data can be seized by law enforcement on suspicion that a crime has been committed.

However, there is a risk that your data could legally be seized by law enforcement in a non-EU country (e.g. the US, Russia, etc.) on suspicion of an act that is not a crime in your country and which may not even have been committed in the country wishing to seize it. There have been a number of contentious examples of UK citizens being extradited to the US for these kinds of reasons.

The differences in laws and legal systems between countries, and court decisions such as the recent one, do not make it easier for EU customers to trust non-EU Cloud Providers. In fact, uncertainty seems to be increasing, not decreasing. Waiting for harmonization of legislation or for trade agreements such as TTIP (the Transatlantic Trade and Investment Partnership) is not an answer.

Organizations today are in a situation where, on the one hand, business wants new types of IT services, some of which are only available from the Cloud. On the other hand, there is this uncertainty about what can and cannot be done.

The only thing organizations can (and must) do is to manage this uncertainty in the same way as for other kinds of risks. Businesses are experienced in deciding which risks to take. This starts with a structured approach to Cloud Service Provider selection, involving not only IT but also procurement and legal. It includes legal advice to understand the concrete legal risks. It also includes analyzing the information sensitivity and information protection requirements. In this way, the specific risk of using individual Cloud Service Providers and different deployment models such as public or private Clouds can be analyzed. It transforms uncertainty into a good understanding of the risk being taken.

KuppingerCole’s research around Cloud Assurance and Information Stewardship and our Advisory Services, for instance, can help you with this.

Notably, the frequently quoted answer “let’s just rely on EU CSPs” oversimplifies the challenge. It requires real alternatives and pure-play EU offerings, and both are rare. Many EU offerings are not feature-equal or are far more expensive; others are not pure-play EU. The same applies to other regions, for sure. Yes, these services must be taken into consideration. But “EU is good, US is bad” is too simple when looking at all aspects. It is better to understand the real risks of both and choose the best way based on this – which might include on-premise IT. The basic answer to the question in the title simply is: “It depends.” The better answer is: “Understand the real risk.”

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


IBM to acquire CrossIdeas – further expanding its IAM/IAG portfolio

31.07.2014 by Martin Kuppinger

A while ago I blogged about IBM being back as a leader in the IAM/IAG (Identity Access Management/Governance) market. Today the news that IBM is to acquire CrossIdeas, an Italian vendor in the Access Governance market, hit the wire.

CrossIdeas is a key player in Access Governance in its home market, but has also had some recent success in other markets, both in Europe and the U.S. The company originally started in authorization and role management. Over time, CrossIdeas – formerly known as Engiweb Security before a management buy-out – added further capabilities. At the center of their solution today is their activity-based approach to SoD (Segregation of Duties), which relies on activities within business processes to model SoD rules. This approach allows auditors and business departments to create and edit SoD rules without specific IT knowledge.
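
To illustrate the general idea of activity-based SoD (this is a simplified sketch of the concept, not CrossIdeas’ actual data model; the SAP transaction codes and activity names are only examples):

```python
# Sketch of activity-based SoD checking: rules are pairs of conflicting
# business-process activities, and a user's technical entitlements are
# mapped to activities before the check. All names are illustrative.

SOD_RULES = [
    ("create_supplier_invoice", "approve_supplier_invoice"),
    ("create_vendor", "execute_payment"),
]

ENTITLEMENT_TO_ACTIVITY = {
    "SAP:FB60": "create_supplier_invoice",
    "SAP:MRBR": "approve_supplier_invoice",
    "SAP:XK01": "create_vendor",
    "SAP:F110": "execute_payment",
}

def sod_violations(user_entitlements):
    """Return the SoD rules violated by a user's combined entitlements."""
    activities = {ENTITLEMENT_TO_ACTIVITY[e]
                  for e in user_entitlements if e in ENTITLEMENT_TO_ACTIVITY}
    return [rule for rule in SOD_RULES
            if rule[0] in activities and rule[1] in activities]

print(sod_violations(["SAP:FB60", "SAP:MRBR"]))
# -> [('create_supplier_invoice', 'approve_supplier_invoice')]
```

Because the rules are expressed in business-process terms rather than in technical entitlements, an auditor can review and change them without knowing the underlying transaction codes.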

Aside from its strength in role mining/modeling and the SoD approach (which notably provides sophisticated support for SAP environments), CrossIdeas’ product IDEAS also provides a well thought-out approach to access risk analysis and management. Furthermore, there are standard Access Governance capabilities such as Access Recertification.

In addition, IDEAS provides a standard integration with IBM Security Identity Manager, which has already been deployed at customer sites.

From IBM’s perspective, CrossIdeas and its IDEAS product add several important capabilities to the IBM portfolio. The strength in managing SoDs from a business perspective, relying on business process knowledge, is one of these; access risk management is another. Combined with the existing integration with IBM Security Identity Manager, IDEAS can provide immediate benefit to IBM. It fits well into IBM’s strategy on IAM/IAG, enhancing IBM’s offerings for “policy-based Identity and Access Analytics”.

From KuppingerCole’s perspective, IBM is further strengthening its position in the IAM/IAG market. While IDEAS is “ready to use” thanks to the existing integration, we expect to see further integration at all levels – platform technology, user interfaces, etc. – into the IBM IAM/IAG portfolio quite soon.

My final paragraph in the other blog post linked at the beginning was:

I always appreciate strong competitors in a market – it helps drive innovation, which is good for the customers. The IBM investment in IAM is also a good indicator of the relevance of the market segment itself – IAM is one of the key elements for Information Security. IBM’s strategy also aligns well with my view that IAM is just one part of what you need for Information Security. Integration beyond the core IAM capabilities is needed. So, in light of IBM’s current news around IAM, I think it is worth having a closer look at them again.

Nothing to add to this.

 

Related KuppingerCole Research

Leadership Compass Access Governance

Executive View IBM Security QRadar

Leadership Compass Dynamic Authorization Management

Leadership Compass Identity Provisioning

Buyer’s Guide Access Governance and Identity Provisioning

Advisory Note Access Governance Architectures

Executive View IBM Security Access Manager for Enterprise Single Sign-On

Product Report CrossIdeas IDEAS


More questions IAM buyers should ask

03.07.2014 by Martin Kuppinger

Earlier this year, I published the Buyer’s Guide: Access Governance and Provisioning. That document provides condensed information about key selection criteria for Identity Provisioning and Access Governance products, while also posing questions that buyers should ask of vendors.

I focused on the “top 10 non-functional selection criteria” or “top 10 questions to ask the vendors”. As always with such top-xx lists, some aspects are not covered. The feedback I have received so far adds some interesting aspects.

One is localization, i.e. support for different languages, character sets, etc. Given that Access Governance in particular is a business-user application, it must be localized. Thus, questions such as the following might be considered:

  • Which languages are supported by the end user interfaces? (maybe with a list of languages a buyer specifically needs)
  • Can further languages be added?
  • Is there support for double-byte characters in the user interface and the search capabilities?

The second area is reporting. This is not only about advanced “Identity/Access Analytics”, but also about basic reporting capabilities. Questions to ask here include, for instance:

  • How do you modify an existing report?
  • How do you implement a new one?
  • Do the reports support multiple languages? Can this be implemented?

Clearly, there are far more criteria to look at when doing a thorough product selection. That is why the Buyer’s Guide is only one part of KuppingerCole services. Leadership Compass documents help in identifying relevant vendors and their particular strengths. Other reports such as Product Reports and Executive Views dive into more detail. Our advisory services include IAM/IAG maturity analysis, i.e. understanding the maturity of the current state of your IAM/IAG program, but also support the selection of vendors, backed by comprehensive, fine-grained questionnaires for RFI (Request For Information) processes. Just talk with my colleagues at sales@kuppingercole.com if you need more than the Top 10 questions.


Extending your Active Directory to the Cloud

01.07.2014 by Martin Kuppinger

Most organizations have a Microsoft Active Directory in place. The Active Directory (or, in short, AD) forms the foundation of their on-premises infrastructure for managing users, performing their primary network authentication and authentication to AD-integrated applications such as Microsoft Exchange Server, and providing some network infrastructure services, including client configuration management based on Group Policies. AD is a purpose-built directory service that is optimized for supporting these requirements. One of its specific capabilities is Group Policies – client management is commonly out of scope for directory services. Another example is AD’s sophisticated replication features, which are required to provide (amongst others) seamless authentication, load-balancing of authentication requests, and user management.

This works well for employees and the on-premise IT infrastructure. However, when it comes to external users, things become more challenging. While most organizations manage the “long-term” externals – the ones who spend a lot of time on-premises, need access to internal IT systems, and frequently even have a company e-mail address – in the Active Directory, they struggle with managing all the other externals, such as employees of business partners who only occasionally access a selected application, or customers.

The purpose-built AD is not targeted towards these use cases. On-boarding and off-boarding thousands of employees of an insurance broker or managing the local operators of an airline across the world are not the standard use cases for AD. And what about managing millions of customers that need access to some applications?

There are workarounds, but none of them is really convincing. These external users might be managed in a separate forest or in a separate domain within an existing forest. They might even be managed within an existing domain (particularly in ADs that follow a single-domain approach), but that makes security management pretty cumbersome. And this does not even touch on challenges such as schema changes for specific requirements, or the replication issues caused by managing a far larger number of users than just the employees in the Active Directory.

The common answer to these challenges is to set up another, separate directory service for external users or customers. Microsoft’s lightweight answer is AD LDS (Active Directory Lightweight Directory Services). Other vendors provide their LDAP (Lightweight Directory Access Protocol) directory servers to manage these users and authenticate them.

But there is another answer now: cloud-based User and Access Management as part of the emerging cloud IAM offerings. Several vendors deliver solutions that allow managing customers and external users in integration with the existing on-premise infrastructure. Microsoft’s own answer in that field is Azure Active Directory, a cloud-based directory service that is quite different from the traditional Active Directory. It supports flexible schemas, scales virtually without limits (Microsoft Office 365 is based on it), and provides functionality that helps manage external users far better than the on-premise Active Directory can – and potentially better than other on-premise directory services can. With upcoming extensions, Microsoft will further add capabilities for managing external users.

There are challenges such as synchronizing and/or federating the existing users of AD and other directory services to Azure Active Directory (or other services in that field).
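
As a rough illustration of what such synchronization involves, here is a minimal one-way sketch that reads external users from the on-premise AD and pushes them to a cloud directory. It uses the Python ldap3 library for the AD read; the cloud endpoint and payload format are purely hypothetical placeholders, not Azure Active Directory’s actual API:

```python
# One-way sync sketch: on-premise AD -> cloud directory.
# ldap3 handles the AD read; the cloud API below is a hypothetical placeholder.

import requests
from ldap3 import Server, Connection, ALL, SUBTREE

CLOUD_API = "https://cloud-directory.example.com/api/users"   # hypothetical
AD_BASE_DN = "dc=example,dc=com"

server = Server("ldaps://dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_sync", password="***", auto_bind=True)

# Read external users from a dedicated OU in AD
conn.search(
    "ou=externals," + AD_BASE_DN,
    "(&(objectClass=user)(mail=*))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "mail", "displayName"],
)

for entry in conn.entries:
    # Push a minimal representation of each user to the cloud directory
    payload = {
        "userName": entry.sAMAccountName.value,
        "email": entry.mail.value,
        "displayName": entry.displayName.value,
    }
    requests.post(CLOUD_API, json=payload, timeout=10).raise_for_status()
```

In practice, federation (so that passwords never leave the on-premise AD) and ongoing delta synchronization matter at least as much as the initial provisioning shown here.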

Nevertheless, there are new options now to extend the existing AD to the cloud and to serve the new business demand for on-boarding, off-boarding, and managing business partners and customers – delivered by Microsoft and other players in the market. Organizations using AD should therefore start reviewing and rethinking their Active Directory strategy. There are various options for extending the on-premise AD to the cloud, and it is time to define the future strategy around AD. That future, for most organizations, will be hybrid.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


Dynamic Authorization Management and ABAC: The journey is the reward

30.05.2014 by Martin Kuppinger

Chinese philosopher Confucius is said to be the originator of the saying “the journey is the reward”. What does it mean? In its historic meaning, it says that by moving forward people will benefit, even if they never reach perfection. Applied to projects, it means that continuous improvements, new understandings, and small successes over time are the reward – not the ideal end-state.

In IT, a project might never reach its desired end-state, at least not at enterprise scale. One example is what is commonly referred to as Dynamic Authorization Management (as a discipline) or ABAC – Attribute-Based Access Control – (as a theoretical concept). Organizations might succeed in a particular project on Dynamic Authorization Management, but they will rarely manage to transform their entire Identity and Access Management in such a way that every single authorization decision is made dynamically, using a central authorization system and relying on one or more attributes (i.e. attribute-based).

There is no doubt that Dynamic Authorization Management is the better way of authorizing access to information and systems, compared to statically assigned entitlements at the system level or the lack of a valid, fine-grained authorization concept. Relying on centrally managed policies provides many benefits: consistency of authorization policies, always up-to-date policies, and reduced administrative effort, to name just a few. Another important point is that Dynamic Authorization Management allows authorization decisions to be made in the context of the user, if integrated with versatile, risk- and context-based authentication.
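
A minimal sketch may help to show what “centrally managed policies” means in practice. The following policy decision point evaluates a request against attribute-based policies; the attribute names and policy format are illustrative only, not XACML or any vendor’s syntax:

```python
# Sketch of an attribute-based policy decision point (PDP).
# Policies are plain data; the application only asks "permit or deny?"
# and never hard-codes entitlements.

def evaluate(policies, subject, resource, action, context):
    """Return the effect of the first policy whose conditions all match, else 'Deny'."""
    request = {"subject": subject, "resource": resource,
               "action": action, "context": context}
    for policy in policies:
        if all(request[part].get(attr) == value
               for part, attrs in policy["match"].items()
               for attr, value in attrs.items()):
            return policy["effect"]
    return "Deny"   # deny by default

# One centrally managed policy: clerks may approve invoices,
# but only during business hours and from a low-risk session.
policies = [{
    "effect": "Permit",
    "match": {
        "subject":  {"role": "clerk"},
        "resource": {"type": "invoice"},
        "action":   {"id": "approve"},
        "context":  {"business_hours": True, "session_risk": "low"},
    },
}]

decision = evaluate(
    policies,
    subject={"role": "clerk", "department": "finance"},
    resource={"type": "invoice", "amount": 900},
    action={"id": "approve"},
    context={"business_hours": True, "session_risk": "low"},
)
print(decision)   # -> Permit
```

Tightening the rule – for instance, denying approval above a certain invoice amount – then becomes a change to centrally managed policy data, not a code change in every application.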

While the discussion about RBAC (Role-Based Access Control) versus ABAC (Attribute-Based Access Control) is somewhat artificial and theoretical, moving towards Dynamic Authorization Management is a must for mature IAM/IAG infrastructures. There are too many advantages. Notably, Dynamic Authorization Management is not new. Some of today’s products came to the market back in the 1990s. In mainframe infrastructures, Dynamic Authorization Management even dates back to the 1970s.

However, there are four challenges:

  • Existing applications
  • Software architects and developers
  • Providers of Commercial off-the-shelf (COTS) software
  • Cloud Service Providers (CSPs) and standards bodies

Most existing applications do not support the externalization of authorization decisions to a Dynamic Authorization Management system. Changing such applications is at best expensive and cumbersome, but for many applications this is just impossible.

Software architects and developers might be hard to convince to change the way they implement security (or what they believe is security). Despite the fact that IAM/IAG and software development are commonly separate silos in IT organizations, this is the challenge that is easiest to solve: explain the need and provide simple interfaces to the Dynamic Authorization Management system that make the developer’s life easier, not more complex, and you will succeed.

For providers of COTS software, things are more difficult. They rarely support standards such as XACML (Extensible Access Control Markup Language) to interface with Dynamic Authorization Management systems. Even while you might have a well-working gate from procurement to Information Security, that does not help unless the COTS software provides the required interfaces.

Things become even worse with the Cloud. There is just no adequate authorization standard for the Cloud yet. Given the fact that a very significant portion of Cloud services still lacks support for basic standards such as SAML (Security Assertion Markup Language), this is no surprise. This will change, but it will take a while.

There are some workarounds such as applying Dynamic Authorization Management at the level of XML Gateways, API Gateways, or Web Access Management solutions. However, there will remain many applications which just can’t be moved to Dynamic Authorization Management within a foreseeable period of time.

Despite these challenges, Dynamic Authorization Management is a must for every organization that wants to mature its IAM/IAG infrastructure and improve Information Security. Thus it is high time to evaluate these concepts and start using them.

But even then, Dynamic Authorization Management must be considered as a long journey, where every single application on-boarded is considered a reward.


How to identify attacks? Know your enemies – and what they already might do.

26.05.2014 by Martin Kuppinger

In a panel discussion I had at EIC 2014 with Roy Adar, Vice President of Product Management at CyberArk, Roy brought up an interesting number: according to research, attacks start on average 200 days before they are detected. Taking into account the distribution behind this average, some attackers might have been active for years before they were detected. And who knows whether all attacks are detected at all.

How to react to this? There are several elements in the answer. Protect your systems with various layers of security. Use anti-malware tools, even though they won’t catch every piece of malware or every attacker. Encrypt your sensitive information. Educate your employees. These and other “standard” actions are quite common. But there is at least one other thing you should do: analyze the behavior of users in your network.

I do not mean user tracking in the sense of “do they do their job” (which is hard to implement in countries with strong worker councils); I’m talking about identifying anomalies in their behavior. Attackers are characterized by uncommon behavior. Users might access far more documents than average or than they did before. Accounts might be used at unusual times. Users might log in from suspicious locations. Sometimes it is not a single incident, but a combination of things, possibly spread over a longer period of time, that is typical for a specific form of attack, especially in the case of long-running APTs (Advanced Persistent Threats).

There is an increasing number of technologies available to analyze such patterns. Standard SIEM (Security Information and Event Management) tools are one approach; however, analysis of anomalies might be difficult to perform based on rules alone. A growing number of solutions therefore rely on more advanced pattern-matching technologies. These can, based on specific mathematical algorithms, turn log events and other information into patterns (in fact complex matrices) and analyze these for anomalies. There might be some noise in the form of false positives in the results, but this is true for rule-based analytics as well. Combining such analytical technologies can make a lot of sense – if you bring together specialized analytics for areas such as Privilege Management (for instance, CyberArk’s PTA), User Activity Monitoring, pattern-based analytics, and traditional SIEM, you might learn a lot about these anomalies and hence about the attacks that are already running and the attackers behind them.
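
As a simple illustration of baseline-based anomaly detection (a deliberately naive sketch, not how any of the mentioned products work internally):

```python
# Sketch: flag a user's daily activity count (e.g. documents accessed) when it
# deviates strongly from that user's own historical baseline.

from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it is more than `threshold` standard deviations
    away from the user's historical baseline."""
    if len(history) < 5:
        return False                  # not enough data for a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A user who normally opens 20-30 documents a day suddenly opens 400
history = [22, 25, 19, 28, 24, 26, 21, 23]
print(is_anomalous(history, 400))   # -> True
print(is_anomalous(history, 27))    # -> False
```

Real solutions correlate many such signals – times, locations, accessed systems – over longer periods, which is exactly where rule-only approaches start to struggle.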

From our perspective, all this is converging into a new discipline we call Real-Time Security Intelligence (RSI). There is a new report out on that topic. I also recently wrote another post on RSI.

Even if you feel it is too early to move towards RSI, you should focus on learning more about the attackers that might already be inside your network. Understanding anomalies and patterns with new types of analytical technologies can help.


The Future of Corporate IT

06.05.2014 by Martin Kuppinger

Today’s IT is driven by some major evolutions, and everything done in IT has to take them into account. One is Social Computing. The second is Mobile Computing. The third is Cloud Computing. All these trends affect IT fundamentally. The consumerization and deperimeterization of IT are logical consequences: information technology (IT) is available to virtually everyone and virtually everywhere.

When looking at the future of IT Organizations, Cloud Computing has the biggest impact. With the rise of Cloud Computing, IT managers and business started to feel that internal, on-premise IT needs to be able to compete against attractive external offerings. The IT supply chain is changing fundamentally, with far more suppliers within reach. This evolution is neither new nor surprising: IT overall is moving from manufacturing to industrialization. For IT Organizations, that means they either adapt to that new age of “industrialized IT” or they will fail.

[Diagram: The Future IT Paradigm by KuppingerCole]

The Future IT Paradigm by KuppingerCole, a standardized model for building your future IT, provides the guideline for organizations to move their IT Organization and IT Infrastructure to the next level and to make it future-proof. It helps in fulfilling the major business requirements:

  • Provide the services that business really needs – agile, just in-time, cost-effective, and in the way business really needs them
  • Enforce Information Security and protect the sensitive business information and intellectual property of the organization
  • Mitigate IT risks, stay compliant, and enforce an enterprise-wide Governance approach

Looking at the Future IT Paradigm by KuppingerCole, it becomes obvious that the key to the new IT Organization is the segmentation of IT according to the layers defined in this model. But there is much more: it is also about creating new roles and responsibilities in the IT Organization. It helps IT Organizations regain leadership and make their on-premise IT production state-of-the-art again. The Future IT Paradigm by KuppingerCole consists of three layers – plus the Governance infrastructure and IT & Security Management.

Business Service Delivery focuses on providing exactly the services business needs, in the way business needs them, and on time. It is all about interfacing Business and IT. This is where Business/IT alignment moves from a buzzword towards reality.

Service & Information Management is what we could also call “Core IT”. This is where services are managed and where IT services are transformed into business services. This is also where IT Security is enforced, and it is the level where information is managed.

IT Service Production is about producing services and providing them to the business. This layer supports all types of production environments, from on-premise to any type of cloud. These production units have to provide services in a standardized way. Ideally, they are themselves organized according to that three-layered structure, by understanding the output they provide as business services for their customer – IT itself.

For a full view on that model, an in-depth description of what it means for the IT Organization and which structure, departments, and skills are required, have a look at the KuppingerCole Report #71,200 The Future of IT Organizations.


Real world face recognition and where paper beats the smartphone

01.04.2014 by Martin Kuppinger

A few days ago, I was travelling in a local train, together with a business partner, from my office in Germany to an event in another city. We both learned a lot about the real-world challenges of face recognition.

While I already had a 24-hour ticket for travelling in and around that city, the business partner needed to extend his. He used his smartphone and the railway company’s app to do so. So far, so good.

A few minutes later, a conductor arrived. Verifying my printed ticket was a matter of seconds. Verifying the online ticket turned out to be far more complex. First, the conductor needed to scan the QR code of the online ticket displayed on my business partner’s smartphone. He did so using his own smartphone. It did not work at the original size, so he asked the business partner to enlarge the QR code on the display. Eventually that worked.

However, a second factor was needed, so to speak, to ensure that this was really a personal ticket of my business partner. The conductor’s app provided the name of the person holding the ticket plus the detail that he was using a discounted pass, valid for one year. The business partner showed the annual pass, with the number and a photo of himself printed on the front page. It turned out that this was not sufficient – the face recognition simply failed.

My business partner had to take the discount card out of his wallet, display the backside with his name printed on it, and finally the ticket was validated.

Overall, this took more than a minute. Face plus the number of the discount card plus possession of a smartphone with the valid ticket was not sufficient. In sum, this was cumbersome, inefficient, and costly for the railway company. Imagine what it costs when you need approximately ten times as long to verify tickets: either you check fewer tickets or you need more conductors. Both cost money – either lost revenue from more people travelling without valid tickets, or higher expenses for employing more conductors.

While the face recognition issue was new to me (but funny for the two identity people travelling), the other aspect is very worthy of consideration, because it appears to be a common challenge. I have observed this in other countries as well, where it takes far longer to verify online tickets than it takes to verify paper tickets. Maybe it is sometimes worthwhile to look at the real costs, before the “modern” (but less than perfect) online solution is put into place. Not that I am against online tickets etc. – but I definitely would prefer more efficient solutions. Another post on this topic is here.


Real-time Security Intelligence – more than just “next generation SIEM”

14.03.2014 by Martin Kuppinger

Recently, a spotlight has been shed on the need for investing in Information Security solutions. The increase in cyber-attacks, the consistently high level of internal challenges, the appearance of more sophisticated types of long-running attacks (sometimes called Advanced Persistent Threats or APTs), the concerns regarding cyber-security following the Snowden revelations, the permanent challenge of dealing with zero-day attacks that leave no time between a vulnerability becoming public and attacks happening: all this has led to an understanding of the need for better solutions.

Organizations have to assume that the attacker is already in their network. Every organization and every user is a potential target for attackers. On the other hand, with the increasing sophistication of attacks, it is becoming more difficult to identify the attackers. Finally, there is no longer a single perimeter where organizations can place their security systems to prevent external attackers from entering the network: attackers might already have found their way in via mobile devices, they might attack cloud services, etc. Complexity is increasing.

We see a new category of solutions evolving in the market that promises to help customers better solve these challenges. First, though, let’s look at current solutions and why they are not sufficient.

Standard IDS/IPS (Intrusion Detection/Prevention Systems), conceived as edge devices, are obviously limited when there is no well-defined perimeter. They are also limited when it comes to complex attack scenarios involving a number of systems.

SIEM (Security Information and Event Management) is still, typically, a tool-driven approach that requires heavy customization. Unless you are able to configure these systems correctly, they will not deliver on your expectations, for example in the setup of an SOC (Security Operations Centre). When it comes to taking more and more real-time information into account for the analysis, they might show limitations regarding their scalability.

Next Generation Firewalls are again edge devices, suffering from the same conceptual limitations.

Services providing real-time security information – regarding newly detected zero-day attacks, for instance – deliver valuable information, but they don’t fix the problem. Furthermore, they do not provide analysis of what is happening in the internal infrastructure.

Recently, though, we have observed a growing number of vendors moving towards integrated methods for Real-time Security Intelligence, combining various technologies and services:

  • Big Data analytics, enabling the analysis of large amounts of data, based on both rules and patterns;
  • Support for both real-time analytics and historical analysis, which can facilitate identifying new events as being related to those that occurred sometime in the past;
  • Integration to existing sources of information, including SIEM tools;
  • Integration with real-time security information services that provide up-to-date information about newly detected security challenges;
  • Services that provide automatic updated rules and patterns for analytics, i.e. configurations that reduce the need for customers to manually keep the configuration of the Real-time Security Intelligence systems up-to-date;
  • Services that support customers with analytics, i.e. expert services supporting the customer’s SOC;
  • Integration with IT GRC solutions, translating the identified challenges into risk information visible in dashboards for IT and business people.

Real-time Security Intelligence will become a mix of services and software. It will combine various offerings that exist today but are separate from each other. It will allow customers to gain better insight into what has already happened in their networks and what is currently going on. Some vendors even provide the capability of changing network configurations, based on their analytical services.

We expect to see rapid evolution in this area, with further services being added. There is strong potential in integrating network configuration management systems with Real-time Security Intelligence, allowing firewall settings, for example, to be changed on the fly. Another example is integration with SDCI (Software-Defined Computing Infrastructures) to adapt the configuration of networks, storage, and virtual machines when new security challenges are identified, in order to automatically and dynamically minimize the attack surface.
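
As a rough sketch of what such automated network response could look like (the alert format is a hypothetical simplification, and iptables stands in for a full network configuration management system):

```python
# Sketch: block a confirmed malicious source at the host firewall.
# Only high-confidence findings are acted on automatically; everything else
# is queued for manual review in the SOC to avoid acting on false positives.

import subprocess

def block_source(ip_address: str) -> None:
    """Append an iptables DROP rule for the offending source address."""
    subprocess.run(
        ["iptables", "-A", "INPUT", "-s", ip_address, "-j", "DROP"],
        check=True,
    )

def handle_alert(alert: dict) -> None:
    if alert.get("type") == "malicious_source" and alert.get("confidence", 0.0) >= 0.9:
        block_source(alert["source_ip"])
    else:
        print(f"Alert queued for SOC review: {alert}")

handle_alert({"type": "malicious_source",
              "source_ip": "203.0.113.42",
              "confidence": 0.95})
```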

In the evolution towards Real-time Security Intelligence that we observe now, some vendors focus more on Big Data security analytics, while others put more emphasis on online services – but this is just scratching the surface. There will be fundamental changes in the way we do security and run SOCs, going well beyond “Next Generation SIEM”.

Learn more about Real-time Security Intelligence and how to successfully deal with your cyber-security challenges at the upcoming EIC 2014. And don’t miss our upcoming webinar on “Mitigate targeted attacks with privileged account analytics” – not primarily about Real-time Security Intelligence, but about one approach to mitigating the risks of becoming a victim of targeted attacks.


© 2014 Martin Kuppinger, KuppingerCole