More questions IAM buyers should ask

03.07.2014 by Martin Kuppinger

Earlier this year, I published the Buyer’s Guide: Access Governance and Provisioning. That document provides condensed information about key selection criteria for Identity Provisioning and Access Governance products, while also posing questions that buyers should ask of vendors.

I focused on the “top 10 non-functional selection criteria”, or the “top 10 questions to ask the vendors”. As always with such top-n lists, some aspects are not covered. The feedback I have received so far adds some interesting ones.

One is localization, i.e. support for different languages, character sets, etc. Given that Access Governance in particular is a business user application, it must be localized. Thus, questions such as the following might be considered:

  • Which languages are supported by the end user interfaces? (maybe with a list of languages a buyer specifically needs)
  • Can further languages be added?
  • Is there support for double-byte characters in the user interface and the search capabilities?
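If double-byte support is critical, it is worth testing concretely during a proof of concept: does search still match after full-width or accented input is normalized? A minimal sketch of such a test in Python – the sample names are hypothetical:

```python
import unicodedata

def normalize(s: str) -> str:
    # NFKC folds full-width/half-width variants; casefold handles
    # case beyond plain ASCII (e.g. German ß).
    return unicodedata.normalize("NFKC", s).casefold()

users = ["Müller", "山田太郎", "ＹＡＭＡＤＡ"]   # hypothetical directory entries

def search(query: str) -> list:
    q = normalize(query)
    return [u for u in users if q in normalize(u)]

print(search("müller"))   # ['Müller']
print(search("yamada"))   # ['ＹＡＭＡＤＡ'], full-width Latin folds to ASCII
```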

The second area is reporting. This is not only about advanced “Identity/Access Analytics”, but also about basic reporting capabilities. Questions to ask here include, for instance:

  • How do you modify an existing report?
  • How do you implement a new one?
  • Do the reports support multiple languages? Can this be implemented?

Clearly, there are far more criteria to look at when doing a thorough product selection. That is why the Buyer’s Guide is only one part of KuppingerCole services. Leadership Compass documents help in identifying relevant vendors and their particular strengths. Other reports such as Product Reports and Executive Views dive into more detail. Our advisory services include IAM/IAG maturity analysis, i.e. understanding the maturity of the current state of your IAM/IAG program, but also support the selection of vendors, backed by comprehensive, fine-grained questionnaires for RFI (Request For Information) processes. Just talk with my colleagues at sales@kuppingercole.com if you need more than the Top 10 questions.


Extending your Active Directory to the Cloud

01.07.2014 by Martin Kuppinger

Most organizations have a Microsoft Active Directory in place. The Active Directory (or, in short, AD) forms the foundation of their on-premises infrastructure for managing users, performing their primary network authentication as well as authentication to AD-integrated applications such as Microsoft Exchange Server, and providing some network infrastructure services, including client configuration management based on Group Policies. AD is a purpose-built directory service that is optimized for supporting these requirements. One of its specific capabilities is Group Policies – client management commonly is out of scope for directory services. Another example is the sophisticated replication feature set of AD, required to provide (among other things) seamless authentication, load balancing of authentication requests, and user management.
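As an illustration of the authentication role AD plays for integrated applications: one simple way an application can verify a user’s credentials against AD is an LDAP bind. A minimal sketch using the Python ldap3 library; the server and account names are hypothetical:

```python
# Minimal sketch: verify user credentials against AD via an LDAP bind.
# Server and account names are hypothetical; real deployments would
# also use TLS and respect lockout policies.
from ldap3 import Server, Connection, NTLM

def authenticate(domain_user: str, password: str) -> bool:
    server = Server("dc01.corp.example.com")
    conn = Connection(server, user=domain_user, password=password,
                      authentication=NTLM)
    ok = conn.bind()          # True only for valid AD credentials
    conn.unbind()
    return ok

print(authenticate("CORP\\jdoe", "secret"))
```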

This works well for the employees and the on-premise IT infrastructure. However, when it comes to external users, things become more challenging. While most organizations manage the “long term” externals – the ones who spend a lot of time on-premises, need access to internal IT systems, and frequently even have a company e-mail address – in the Active Directory, they struggle with managing all the other externals, such as employees of business partners with occasional access to a selected application only, or customers.

The purpose-built AD is not targeted towards these use cases. On-boarding and off-boarding thousands of employees of an insurance broker, or managing the local operators of an airline across the world, are not standard use cases for AD. And what about managing millions of customers who need access to some applications?

There are workarounds, but none of them is really convincing. These external users might be managed in a separate forest or in a separate domain within an existing forest. They might even be managed within an existing domain (particularly in ADs that follow a single-domain approach), but that makes security management pretty cumbersome. And this is before we even speak about challenges such as schema changes for specific requirements, or the replication issues caused by managing far more users than just the employees in the Active Directory.

The common answer to these challenges is to set up another, separate directory service for external users or customers. Microsoft’s lightweight answer is AD LDS (Active Directory Lightweight Directory Services). Other vendors provide their LDAP (Lightweight Directory Access Protocol) directory servers to manage these users and authenticate them.

But there is another answer now: cloud-based User and Access Management as part of the emerging cloud IAM offerings. Several vendors deliver solutions that allow managing customers and external users in integration with the existing on-premise infrastructure. Microsoft’s own answer in that field is the Azure Active Directory, a cloud-based directory service that is quite different from the traditional Active Directory. It supports flexible schemas, scales virtually without limit (Microsoft Office 365 is based on it), and provides functionality that supports managing external users far better than the on-premise Active Directory does – and potentially better than other on-premise directory services do. With upcoming extensions, Microsoft will further add capabilities for managing external users.
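To give an idea of what managing external users in such a cloud directory can look like, here is a sketch that invites a business partner into a tenant. It assumes today’s Microsoft Graph B2B invitation endpoint and an already-acquired OAuth token – treat both as illustrative assumptions rather than the capabilities described above:

```python
# Sketch: invite an external business partner into a cloud directory
# tenant. Assumes a Microsoft Graph-style B2B invitation endpoint and
# a pre-acquired OAuth bearer token; both are illustrative assumptions.
import requests

def invite_external_user(email: str, token: str) -> dict:
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/invitations",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "invitedUserEmailAddress": email,
            "inviteRedirectUrl": "https://portal.example.com",
            "sendInvitationMessage": True,
        },
    )
    resp.raise_for_status()
    return resp.json()       # contains the created guest user's ID
```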

There are challenges such as synchronizing and/or federating the existing users of AD and other directory services to Azure Active Directory (or other services in that field).

Nevertheless, there are new options now to extend the existing AD to the cloud and to serve the new business demand for on-boarding, off-boarding, and managing business partners and customers – delivered by Microsoft and other players in the market. Organizations using AD should therefore start reviewing and rethinking their Active Directory strategy. There are various options for extending the on-premise AD to the cloud, and it is time to define the future strategy around AD. That future, for most organizations, will be hybrid.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


Dynamic Authorization Management and ABAC: The journey is the reward

30.05.2014 by Martin Kuppinger

Chinese philosopher Confucius is said to be the originator of the saying “the journey is the reward”. What does it mean? In its historic meaning, it says that by moving forward people will benefit, even while they might not reach perfection. Applied to projects, it means that continuous improvements, new understandings and small successes over time are the reward – not the ideal end-state.

In IT, a project might never reach its desired end-state, at least not at enterprise scale. One example is what is commonly referred to as Dynamic Authorization Management (as a discipline) or ABAC – Attribute-based Access Control – (as a theoretical concept). Organizations might succeed in a particular project on Dynamic Authorization Management, but they will rarely manage to transform their entire Identity and Access Management in such a way that every single authorization decision is made dynamically, using a central authorization system and relying on one or more attributes (i.e. attribute-based).

There is no doubt that Dynamic Authorization Management is the better way for authorizing access to information and systems, compared to statically assigned entitlements at the system-level or the lack of a valid, fine-grained authorization concept. Relying on centrally managed policies provides many benefits: consistency of authorization policies, always up-to-date policies, and reduced administrative efforts, to name just a few. Another important point is that Dynamic Authorization Management allows making authorization decisions in the context of the user, if integrated with versatile, risk- and context-based authentication.
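To make this concrete, here is a minimal sketch of such a dynamic decision in Python: the verdict is computed at request time from attributes of the subject, the resource, and the context. The attribute names and the sample policy are hypothetical:

```python
# Minimal sketch of an attribute-based policy decision point (PDP).
# Attribute names and the sample policy are hypothetical.

def decide(subject: dict, resource: dict, context: dict) -> str:
    # Policy: finance staff may read finance documents, but only from
    # a low-risk session; everything else is denied.
    if (subject.get("department") == "finance"
            and resource.get("classification") == "finance"
            and context.get("action") == "read"
            and context.get("session_risk") == "low"):
        return "Permit"
    return "Deny"

print(decide(
    {"user": "alice", "department": "finance"},
    {"id": "report-2014-Q1", "classification": "finance"},
    {"action": "read", "session_risk": "low"},
))  # Permit – the same request from a high-risk session is denied
```

Because the decision is evaluated at request time, a change in session risk or department immediately changes the outcome; no entitlements need to be re-provisioned.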

While the discussion about RBAC (Role-based Access Control) versus ABAC (Attribute-based Access Control) is somewhat artificial and theoretical, moving towards Dynamic Authorization Management is a must for mature IAM/IAG infrastructures. There are too many advantages. Notably, Dynamic Authorization Management is not new. Some of today’s products came to the market back in the 1990s. In mainframe infrastructures, Dynamic Authorization Management even dates back to the 1970s.

However, there are four challenges:

  • Existing applications
  • Software architects and developers
  • Providers of Commercial off-the-shelf (COTS) software
  • Cloud Service Providers (CSPs) and standards bodies

Most existing applications do not support the externalization of authorization decisions to a Dynamic Authorization Management system. Changing such applications is at best expensive and cumbersome; for many applications it is simply impossible.

Software architects and developers might be hard to convince to change the way they implement security (or what they believe is security). Despite the fact that IAM/IAG and software development commonly are separate siloes in IT organizations, this is the challenge that is easiest to solve. Explain the need and provide simple interfaces to the Dynamic Authorization Management system that make the developer’s life easier, not more complex, and you will succeed.
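Such a simple interface can be as small as a decorator that hides the call to the central decision point. The sketch below is illustrative; pdp_decide() stands in for whatever client library a Dynamic Authorization Management product actually provides:

```python
# Sketch of a developer-friendly wrapper around an externalized
# authorization decision. pdp_decide() is a stand-in for a call to a
# real Dynamic Authorization Management system, not a product API.
import functools

def pdp_decide(user: str, action: str, resource: str) -> bool:
    # Placeholder policy; in practice this would query the central PDP.
    return (user, action, resource) == ("alice", "read", "customer-records")

def requires_permission(action: str, resource: str):
    def decorator(func):
        @functools.wraps(func)
        def inner(user, *args, **kwargs):
            if not pdp_decide(user, action, resource):
                raise PermissionError(f"{user} may not {action} {resource}")
            return func(user, *args, **kwargs)
        return inner
    return decorator

@requires_permission("read", "customer-records")
def list_customers(user):
    return ["ACME Corp"]      # hypothetical payload

print(list_customers("alice"))
```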

For providers of COTS software, things are more difficult. They rarely support standards such as XACML (Extensible Access Control Markup Language) to interface with Dynamic Authorization Management systems. Even if you have a well-working gate between procurement and Information Security, that does not help unless the COTS software provides the required interfaces.
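For reference, the required interface is well defined: a XACML request in the OASIS JSON profile, posted to the PDP. The PDP URL and attribute values below are hypothetical:

```python
# The shape of a XACML request per the JSON Profile of XACML 3.0,
# posted to a PDP's REST endpoint. URL and values are hypothetical.
import requests

xacml_request = {
    "Request": {
        "AccessSubject": {"Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id",
             "Value": "alice"}]},
        "Action": {"Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
             "Value": "read"}]},
        "Resource": {"Attribute": [
            {"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id",
             "Value": "customer-records"}]},
    }
}

decision = requests.post("https://pdp.example.com/authorize",
                         json=xacml_request).json()
print(decision["Response"][0]["Decision"])   # e.g. "Permit"
```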

Things become even worse with the Cloud. There is just no adequate authorization standard for the Cloud yet. Given the fact that a very significant portion of Cloud services still lacks support for basic standards such as SAML (Security Assertion Markup Language), this is no surprise. This will change, but it will take a while.

There are some workarounds such as applying Dynamic Authorization Management at the level of XML Gateways, API Gateways, or Web Access Management solutions. However, there will remain many applications which just can’t be moved to Dynamic Authorization Management within a foreseeable period of time.

Despite these challenges, Dynamic Authorization Management is a must for every organization maturing its IAM/IAG infrastructure and improving Information Security. Thus, it is high time to evaluate these concepts and start using them.

But even then, Dynamic Authorization Management must be considered as a long journey, where every single application on-boarded is considered a reward.


How to identify attacks? Know your enemies – and what they might already be doing.

26.05.2014 by Martin Kuppinger

In a panel discussion I had at EIC 2014 with Roy Adar, Vice President of Product Management at CyberArk, Roy brought up an interesting number: according to research, attacks start on average 200 days before they are detected. Taking into account the distribution behind this average, some attackers might have been active for years before they were detected. And who knows whether all of them are ever detected.

How to react to this? There are several elements in the answer. Protect your systems with various layers of security. Use anti-malware tools, even while they won’t catch every malware and every attacker. Encrypt your sensitive information. Educate your employees. These and other “standard” actions are quite common. But there is at least one other thing you should do: analyze the behavior of users in your network.

I do not mean user tracking in the sense of “do they do their job” (which is hard to implement in countries with strong works councils); I’m talking about identifying anomalies in their behavior. Attackers are characterized by uncommon behavior. Users might access far more documents than average or than they did before. Accounts might be used at unusual times. Users might log in from suspicious locations. Sometimes it is not a single incident but a combination of things, possibly over a longer period of time, that is typical for a specific form of attack, especially in the case of long-running APTs (Advanced Persistent Threats).

There is an increasing number of technologies available to analyze such patterns. Standard SIEM (Security Information and Event Management) tools are one approach; however, anomalies are difficult to analyze purely on the basis of rules. An increasing number of solutions therefore rely on more advanced pattern-matching technologies. These can, based on specific mathematical algorithms, turn log events and other information into patterns (in fact, complex matrices) and analyze these for anomalies. There might be some noise in the sense of false positives in the results, but this is true for rule-based analytics as well. Combining such analytical technologies can make a lot of sense – if you bring together specialized analytics for areas such as Privilege Management (for instance, CyberArk’s PTA), User Activity Monitoring, pattern-based analytics, and traditional SIEM, you might learn a lot about these anomalies and hence about the attacks that are already running and the attackers behind them.
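As a toy illustration of the matrix-based approach (not any vendor’s actual algorithm), the following sketch turns per-user event counts into a matrix and flags users whose behavior deviates strongly from the population; the data is synthetic:

```python
# Toy sketch of pattern-based anomaly detection: build a user-by-feature
# matrix from log data and flag strong outliers. Synthetic data; real
# products use far more sophisticated models.
import numpy as np

rng = np.random.default_rng(0)
# Columns: documents accessed, off-hours logins, distinct source IPs
normal = rng.poisson([20, 1, 2], size=(500, 3))
attacker = np.array([[480, 14, 9]])           # one unusually active account
X = np.vstack([normal, attacker]).astype(float)

z = (X - X.mean(axis=0)) / X.std(axis=0)      # z-score per feature
score = np.abs(z).max(axis=1)                 # worst deviation per user
print(np.where(score > 6)[0])                 # -> [500], the attacker row
```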

From our perspective, all this is converging into a new discipline we call Real-Time Security Intelligence (RSI). There is a new report out on that topic. I also recently wrote another post on RSI.

Even if you feel it is too early to move towards RSI, you should focus on how to learn more about the attackers that are already inside your network. Understanding anomalies and patterns with new types of analytical technologies might help.


The Future of Corporate IT

06.05.2014 by Martin Kuppinger

Today’s IT is driven by some major evolutions, and everything done in IT has to take them into account. One is Social Computing. The second is Mobile Computing. The third is Cloud Computing. All these trends affect IT fundamentally. The consumerization and deperimeterization of IT are logical consequences. Information technology (IT) is available to virtually everyone and virtually everywhere.

When looking at the Future of IT Organizations, Cloud Computing has the biggest impact. With the rise of Cloud Computing, IT managers and Business started to feel that the internal, on-premise IT needs to be able to compete against attractive external offerings. The IT Supply Chain is changing fundamentally. There are far more suppliers within reach. This evolution is neither new nor surprising. It is just that IT overall is moving from manufacturing to industrialization. For IT Organizations that means they either have to adapt to that new age of “industrialized IT” or they will fail.

[Figure: The Future IT Paradigm by KuppingerCole]

The Future IT Paradigm by KuppingerCole, a standardized model for building your future IT, provides the guideline for organizations to move their IT Organization and IT Infrastructure to the next level and to make it future-proof. It helps in fulfilling the major business requirements:

  • Provide the services that business really needs – agile, just in-time, cost-effective, and in the way business really needs them
  • Enforce Information Security and protect the sensitive business information and intellectual property of the organization
  • Mitigate your IT risks, stay compliant, and enforce an enterprise-wide Governance approach

Looking at the Future IT Paradigm by KuppingerCole, it becomes obvious that the key to the new IT Organization is the segmentation of IT according to the layers defined in this model. But there is much more. It is also about creating new roles and responsibilities in the IT Organization. It helps IT Organizations in regaining leadership and making their on-premise IT production state-of-the-art again. The Future IT Paradigm by KuppingerCole consists of three layers – plus the Governance infrastructure and IT & Security Management.

Business Service Delivery focuses on providing exactly the services business needs, in the way business needs them, and on time. It is all about interfacing Business and IT. This is where Business/IT alignment moves from a buzzword towards reality.

Service & Information Management is what we also could name “Core IT”. This is where services are managed and where IT services are transformed into business services. This is also where IT Security is enforced. And it is the level where Information is managed.

IT Service Production is about producing services and providing them to the business. This layer supports all types of production environments, from on-premise to any type of cloud. These production units have to provide services in a standardized way. Ideally, they are themselves organized according to that three-layered structure, understanding the output they provide as business services for their customer – IT itself.

For a full view on that model, an in-depth description of what it means for the IT Organization and which structure, departments, and skills are required, have a look at the KuppingerCole Report #71,200 The Future of IT Organizations.


Real world face recognition and where paper beats the smartphone

01.04.2014 by Martin Kuppinger

A few days ago, I was travelling on a local train, together with a business partner, from my office in Germany to an event in another city. We both learned a lot about the real-world challenges of face recognition.

While I already had a 24-hour ticket for travelling in and around that city, the business partner needed to extend his. He used his smartphone and the app of the railway company to do so. So far, so good.

A few minutes later, a conductor arrived. Verifying my printed ticket was a matter of seconds. Verifying the online ticket turned out to be far more complex. First, the conductor needed to scan the QR code of the online ticket displayed on the smartphone of my business partner. He did so using his own smartphone. It did not work with the original size, so he requested the business partner to enlarge the QR code display. Eventually that worked.

However, a second factor was needed, so to speak, to ensure that this really was a personal ticket of my business partner. The conductor’s app provided the name of the person with the ticket, plus the detail that he was using a discounted pass, valid for one year. The business partner showed the annual pass, with its number and a photo of himself printed on the front page. It turned out that this was not sufficient – the face recognition simply failed.

My business partner had to take the discount card out of his wallet, display the backside with his name printed on it, and finally the ticket was validated.

Overall, this took more than a minute. Face plus number of the discount card plus possession of a smartphone with the valid ticket was not sufficient. In sum, this was cumbersome, inefficient, and costly for the railway company. Imagine what it costs when verifying a ticket takes roughly ten times as long. Either you check fewer tickets or you need more conductors. Both cost money: either lost revenue from more people travelling without valid tickets, or higher expenses for employing more conductors.

While the face recognition issue was new to me (but funny for the two identity people travelling), the other aspect is very worthy of consideration, because it appears to be a common challenge. I have observed this in other countries as well, where it takes far longer to verify online tickets than it takes to verify paper tickets. Maybe it is sometimes worthwhile to look at the real costs, before the “modern” (but less than perfect) online solution is put into place. Not that I am against online tickets etc. – but I definitely would prefer more efficient solutions. Another post on this topic is here.


Real-time Security Intelligence – more than just “next generation SIEM”

14.03.2014 by Martin Kuppinger

Recently, a spotlight has been shed on the need for investing in Information Security solutions. The increase in cyber-attacks, the consistently high level of internal challenges, the appearance of more sophisticated types of long-running attacks (sometimes called Advanced Persistent Threats or APTs), the concerns regarding cyber-security following the Snowden revelations, the permanent challenge of dealing with zero-day attacks that leave no time between a vulnerability becoming public and attacks happening: all this has led to an understanding of the need for better solutions.

Organizations have to assume that the attacker is already in their network. Every organization and every user is a potential target for attackers. On the other hand, with the increasing sophistication of attacks, it is becoming more difficult to identify the attackers. Finally, there is no such thing as the single perimeter anymore where organizations can place their security systems to prevent external attackers from entering the network. They might already have found their way via mobile devices, they might attack cloud services, etc. Complexity is increasing.

We see a new category of solutions evolving in the market that promise to help customers better solve these challenges. First, though, let’s look at why current solutions are not sufficient.

Standard IDS/IPS (Intrusion Detection/Prevention Systems), conceived as edge devices, are obviously limited when there is no well-defined perimeter. They also are limited when it comes to complex attack scenarios involving a number of systems.

SIEM (Security Information and Event Management) is still, typically, a tool-driven approach that requires heavy customization. Unless you are able to configure these systems correctly, they will not deliver on your expectations in the setup of, for example, an SOC (Security Operations Centre). When it comes to taking more and more real-time information into account for the analysis, they might show limitations regarding their scalability.

Next Generation Firewalls, again, are edge devices, suffering from the same conceptual limitations.

Services providing real-time security information – regarding newly detected zero-day attacks, for instance – deliver valuable information, but they don’t fix the problem. Furthermore, they do not provide the analysis of what is happening in the internal infrastructure.

Recently, though, we have observed a growing number of vendors moving towards integrated methods for Real-time Security Intelligence, combining various technologies and services:

  • Big Data analytics, enabling the analysis of large amounts of data, based on both rules and patterns;
  • Support for both real-time analytics and historical analysis, which can facilitate identifying new events as being related to those that occurred sometime in the past;
  • Integration to existing sources of information, including SIEM tools;
  • Integration with real-time security information services that provide up-to-date information about newly detected security challenges (illustrated in the sketch after this list);
  • Services that provide automatically updated rules and patterns for analytics, i.e. configurations that reduce the need for customers to manually keep the configuration of the Real-time Security Intelligence systems up-to-date;
  • Services that support customers with analytics, i.e. expert services supporting the customer’s SOC;
  • Integration with IT GRC solutions, translating the identified challenges into risk information visible in dashboards for IT and business people.
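As a minimal illustration of the integration with external security information services, the sketch below screens a stream of events against an externally maintained indicator list; the feed URL and event format are hypothetical:

```python
# Sketch: enrich a stream of events with an externally maintained
# indicator feed (e.g. addresses linked to newly detected attacks).
# Feed URL and event format are hypothetical.
import requests

def load_indicators(url="https://intel.example.com/indicators.txt") -> set:
    return set(requests.get(url).text.split())

def screen(events, indicators):
    for user, source_ip in events:
        if source_ip in indicators:
            yield user, source_ip             # surface for the SOC dashboard

indicators = load_indicators()                # refresh periodically in practice
events = [("alice", "203.0.113.7"), ("bob", "198.51.100.99")]
for hit in screen(events, indicators):
    print("alert:", hit)
```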

Real-time Security Intelligence will become a mix of services and software. It will combine various offerings that exist today but are separate from each other. It will allow customers to get better insight into what is already happening in their networks. Some vendors even provide the capability of changing network configurations, based on their analytical services.

We expect to see rapid evolution in this area, with further services to be added. A strong potential is in integrating network configuration management systems with Real-time Security Intelligence, allowing firewall settings, for example, to be changed on the fly. Another example is integration with SDCI (Software Defined Computing Infrastructures) to adapt the configuration of networks, storage, and virtual machines when new security challenges are identified, to automatically and dynamically minimize the attack surface.

In the evolution towards Real-time Security Intelligence that we observe now, some vendors focus more on Big Data security analytics while others put more emphasis on online services – but this is just scratching the surface. There will be fundamental changes in the way we do security and run SOCs, going well beyond just being “Next Generation SIEM”.

Learn more about Real-time Security Intelligence and how to successfully deal with your cyber-security challenges at the upcoming EIC 2014. And don’t miss our upcoming webinar on “Mitigate targeted attacks with privileged account analytics” – not primarily about Real-time Security Intelligence, but about one approach to mitigating the risks of becoming a victim of targeted attacks.


The end of the Social Login begins: FIDO Alliance, Samsung, and PayPal to redefine authentication

06.03.2014 by Martin Kuppinger

Recently, the FIDO Alliance announced that PayPal and Samsung were enabling consumer payments with fingerprint authentication on the new Samsung Galaxy S5. My valued colleague Dave Kearns and I have written various posts about the FIDO Alliance and the impact we expect they will have on the market of strong authentication and BYOI (Bring Your Own Identity). Have a look here, here, and here.

What first reads like one of those unexciting press releases I receive in huge quantities daily is in fact a groundbreaking paradigm shift that will have massive impact on device vendors, strong authentication technology providers, and – last but not least – social networks.

FIDO is all about enabling users to rely on one personal digital identity, their “own identity” in BYOI, to access various services. Not only that, it is also about enabling BYOI with strong authentication and, finally, getting rid of username and password authentication. While the Samsung/PayPal case is the first large use case for the FIDO Alliance, this is just the beginning. Looking at the long list of members of the FIDO Alliance, others will follow. Users then can access various services, relying on strong authentication and a locally managed identity on their smartphone. In addition, Samsung will not remain the only device vendor delivering FIDO-enabled devices.
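The mechanism behind this, reduced to its essence, is a device-held key pair: the service stores only a public key and verifies a signed challenge, while the fingerprint merely unlocks the private key locally. A simplified sketch in Python – the concept, not the actual FIDO protocol:

```python
# Simplified sketch of FIDO-style challenge-response. The relying party
# keeps only the public key; the private key never leaves the device,
# where the fingerprint merely unlocks its use. Concept only, not the
# actual FIDO wire protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device generates a key pair and sends the public key.
device_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_key.public_key()

# Authentication: the service sends a fresh challenge...
challenge = os.urandom(32)
# ...the device signs it once the user unlocks the key (fingerprint)...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
# ...and the service verifies; raises InvalidSignature on failure.
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("authenticated: no password ever crossed the wire")
```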

Obviously, that will affect many markets. Strong authentication vendors, device vendors, services acting as Identity Providers, etc.

It especially will have a massive impact on social networks. A significant part of their attractiveness is that many of them have become Identity Providers, supporting the “social login”. Users are bound to the networks, and the networks learn about user behavior, which is at the core of their business model. However, there is a downside to that from a marketing perspective, as I recently explained. Aside from that, social logins commonly lack support for strong authentication.

If the FIDO Alliance’s success continues, the need for social logins – the most common form of BYOI – will disappear. Why should users rely on social logins when they have a more secure way to authenticate, built into their devices of choice? With the beginning of the end of social logins, an important part of the business model of social networks starts to crumble away. And that is the really big news behind the recent announcement of the FIDO Alliance.


The Mt. Gox Bitcoin disaster and the need for innovation in the finance industry

05.03.2014 by Martin Kuppinger

A few days ago, Tokyo-based Bitcoin exchange Mt. Gox appeared to be in trouble. When looking at their website Friday morning, I only found meaningless announcements: they are “working very hard to find a solution to our recent issues”. Looking at the situation realistically, chances are high that the owners of the Bitcoins have lost a significant part, if not all, of their money. Just a few hours later, the news spread that Mt. Gox had gone bankrupt. While it is still unclear what exactly happened and what will happen now with the Bitcoins and Mt. Gox, this sheds light on the concept of Bitcoins. Bitcoins were claimed to be absolutely safe. However, when you cannot use them but instead lose your “money”, this obviously is not the case.

There are good reasons for having trusted parties in the Finance Industry. Despite all the turmoil that industry went through in recent years – and back in the Great Depression and at other times – the concepts have worked relatively well.

On the other hand, the initial success of the Bitcoin currency also demonstrated that there might be a need for other concepts, aside from traditional currencies and the way financial transactions are handled. Even if the concept of Bitcoin might have been the wrong answer, that discussion will continue. Aside from requiring a trustworthy provider and exchange infrastructure, there are other questions to answer. One is about security, given an increasing number of attacks. Overall, there is a strong trend towards crypto-currencies. We will see a lot of evolution, and we most likely will see failures and disasters, but it is not likely that crypto-currencies will disappear again.

It will be interesting to observe how the Finance Industry reacts to that pressure. While Bitcoins are the most prominent topic these days, PayPal and other new players in the mobile and online payment market probably are the bigger challenge to the Finance Industry. PayPal is in fact a specific new type of financial institution: a bank that knows how to provide APIs and how to interact with other players. It knows how to support supply chains. It knows how to find the balance between security and customer convenience.

On the other hand, Financial Institutions still are trusted when it comes to money. They know how to do security. The challenge is how to make the banking business fit for the changing landscape of the Computing Troika (Cloud Computing, Mobile Computing, and Social Computing) and enable them to provide their proven services for a world of consumerized IT. It is about API-enabling that industry.

However, that is more of a technical perspective. In fact, it is about moving banking IT to a level that allows Financial Institutions to leverage their strengths while becoming agile enough to compete with new players in the market. There is strong potential for trusted Financial Institutions to do so. However, that requires banks to look closely at API Management and Security, BYOI (Bring Your Own Identity), and trust and privacy concepts such as Life Management Platforms.

EIC 2014 will dive into this topic in the Finance Industry Roundtable on the “Future Model of Banking”. Discuss and learn how to enable business agility by doing things right in IT.

I personally believe that classical financial institutions have strong potential in the future Finance business, despite Bitcoins and other concepts. I also believe in regulations. There is a good reason for regulations in the Finance Industry; having such regulations in place might have prevented the situation Mt. Gox and Bitcoins are in today. That is where the established Finance Industry comes into play: making crypto-currency more secure by providing professional services that comply with the regulations. Regulations will come for that field – and then the Finance Industry has an advantage again, if it is agile enough to support these new models by then.

Clearly, you might argue that the main value of crypto-currency is not about having a regulated and safe method of payment, but one that is not traceable. It was Silk Road that brought Bitcoin to prominence. The question is whether there is a need for crypto-currency aside from the dark side of the Internet. I think there is, with crypto-currencies being the “cash” of the Internet: no transaction fees, no fees for exchanging into other currencies. There is potential value in using that type of currency.

However, regulation and anonymity do not necessarily exclude each other. Take the analog world as an example: cash lets me buy whatever I like anonymously, but the place where I deposit my cash (the bank) is regulated. Banks should be clever enough to bring the trust that regulation creates to the crypto-currency world.


The new ABC: Agile businesses – connected

05.03.2014 by Martin Kuppinger

Agility is a key capability of successful organizations: the ability to quickly adapt the organization and the business model to new customer demands, innovations, and a changing competitive landscape. We live in a time where virtually all business relies on IT. Whether in retail, finance, or life sciences – business requires IT. The consequence is that IT has to support business agility. No IT agility = no business agility.

One of the biggest changes we are currently observing is the evolution from stand-alone to connected businesses. New collaborative business models, tighter and more flexible integration of customers and business partners, and the upcoming IoEE (Internet of Everything and Everyone) are driving the evolution of businesses. Cloud Computing, Mobile Computing, and Social Computing, the so-called “Computing Troika”, are already consequences of the business demand for agile and connected IT.

The challenge in this evolution is finding the balance between the business demand for agility and connectivity on the one hand and the IT and Information Security requirements on the other. Information Security can no longer think in terms of perimeters, devices, and system security. There is no closed perimeter anymore. Devices are under constant change. Systems might become Cloud services the next day.

The other part of the challenge is managing the users. Instead of focusing on the employees and a few business partners, there is a demand for rapid on-boarding and off-boarding of customers and business partners in changing business and collaboration models. And there is the need to on-board employees to business partner systems, to manage users in industry collaboration networks, and to manage user access to Cloud services.

Information Security in these days of the new ABC is primarily driven by two evolutions. First, there is flexible user management that allows IT to manage the access of all types of users to all types of services – external and internal users, on-premise IT and Cloud services. Having a (one!) user and access management infrastructure in place to support this change is a key success factor. This infrastructure commonly consists of a mix of on-premise and Cloud IAM.

The other fundamental shift is in what we protect. As the term “Information Security” implies, it is about securing information. In the new ABC, securing information is at the centre of attention. Technologies such as Information Rights Management allow for Secure Information Sharing.

IT that wants to succeed in supporting the business demand for agility and connectivity will have to move from traditional perimeter and device security towards information-centric approaches and flexible user management for all types of users. Identity and information are the new perimeters for security, not the firewall or the device. Rethink your IT and Information Security – get ready for the new ABC.



© 2014 Martin Kuppinger, KuppingerCole