KuppingerCole Predictions and Recommendations 2014

19.12.2013 by Martin Kuppinger

On Monday this week, we published the KuppingerCole Predictions and Recommendations for 2014. They differ from other crystal-ball publications in one important respect: we not only provide predictions, but also recommendations. More on that below.

Information Security is in constant flux. With the changing threat landscape, as well as a steady stream of innovations, demand for Information Security solutions is both growing and re-focusing. Based on new offerings and changing demand, KuppingerCole predicts several major changes in the Information Security market. KuppingerCole specifically identified the following areas where we see massive change in 2014:

  • Software Defined Networking (SDN) – Software Defined Computing Infrastructures (SDCI)
  • Integrated Real-time Network Security Analytics
  • Cloud IAM (Identity and Access Management)
  • Digital, Smart Manufacturing & Smart Infrastructure: ICS & SCADA
  • API Economy
  • IoEE (Internet of Everything and Everyone)
  • BYOI (Bring Your Own Identity) and Biometric Authentication
  • Big Data
  • Cloud Service Provider Selection and Assurance
  • Ubiquitous Encryption

The document provides both predictions and recommendations. The latter focus on how organizations should react to the changes we predict. It is not always best to jump on every trend and hype – in many cases, it is about defining strategies first and making organizational changes before starting to implement new types of technologies. Do not rely on predictions only.

Have a look at our document. The best place to learn more about these topics is the upcoming European Identity and Cloud Conference (EIC) 2014, Munich, May 13th to 16th. And don’t miss all our current and upcoming research on these topics.

SDN is NOT the next big thing. SDCI is.

13.12.2013 by Martin Kuppinger

I have read many predictions recently that SDN (Software Defined Networking) is the next big thing in IT. Wrong. It is not. It is just a small piece in a bigger story. And just looking at SDN is not sufficient.

The next big thing is SDCI – Software Defined Computing Infrastructure. This is about “software-defining” everything. Hardware virtualization – “software defining hardware”, so to speak – is a reality. Software Defined Storage is becoming increasingly popular. SDN is another element. A number of vendors, such as VMware, talk about a Software Defined Cloud Datacenter. I don’t like that term, because the “Cloud” element might be nice from a marketing perspective, but tends to narrow things down to a specific form of Computing Infrastructure. So I will use SDCI for now.

When comparing SDCI with SDN, claiming that SDN is the next big thing is like inventing a locomotive without building a rail infrastructure. SDN solves only a portion of the problem, from a technical, network-centric view.

However, SDCI is far more than that. It is about managing how business services are operated on a flexible computing infrastructure, which must include all elements of this infrastructure. It is about defining the policies for the entire infrastructure. This is an interesting challenge, because it is not about network, storage or other technical policies anymore, but about translating the business policies. Regulatory compliance, security requirements, availability, performance, but also the willingness of business to pay for a certain level of service – all that flows into policies that define how infrastructure is used and how to balance various requirements.
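To illustrate the idea of translating business policies into infrastructure policies, here is a minimal, hypothetical sketch in Python. All class and field names, thresholds, and the mapping rules are illustrative assumptions, not taken from any product:

```python
from dataclasses import dataclass

@dataclass
class BusinessPolicy:
    """Business-level requirements for a service (illustrative only)."""
    service: str
    regulation: str          # e.g. "PCI-DSS", or "none"
    min_availability: float  # e.g. 0.999 for "three nines"

@dataclass
class InfrastructurePolicy:
    """Derived technical policy spanning compute, storage, and network."""
    service: str
    replicas: int
    encrypted_storage: bool
    isolated_network_segment: bool

def translate(p: BusinessPolicy) -> InfrastructurePolicy:
    # Higher availability demands more replicas; regulatory requirements
    # demand encryption and network isolation. A real policy engine
    # would be far richer, but the principle is the same: one business
    # policy drives consistent settings across all infrastructure layers.
    replicas = 3 if p.min_availability >= 0.999 else 2
    regulated = p.regulation != "none"
    return InfrastructurePolicy(p.service, replicas, regulated, regulated)

policy = translate(BusinessPolicy("billing", "PCI-DSS", 0.999))
print(policy.replicas, policy.encrypted_storage)  # 3 True
```

The point of the sketch is that the network settings are not defined in isolation – they fall out of the same business policy that drives storage and compute, which is exactly what SDN alone cannot deliver.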

SDCI also will revolutionize security, in particular network security. In dynamic environments, there is no place for traditional firewalls anymore, but there are fantastic new opportunities for securing information. Such infrastructures allow us to manage security consistently across “machines”, storage, and network, in the context of business policies and in the context of identities. Instead of having multiple disparate approaches to security – a little bit of firewall here, a little bit of endpoint security here, some IAM there, etc. – we are heading towards integrated security. This integrated security still can be layered, but it will be layered in an integrated manner, unlike “layered” security today, which means using multiple levels of disparate security where no one really knows how good the combined result really is – just because there is no integration, no consistent view, no consistent policies.

The security aspect is another reason why SDNs by themselves are not what we need. SDNs are about continuing segregation. They allow us to repeat mistakes at a higher level. SDCI allows us to do things better. That’s the reason why SDCI is the next big thing – and it will become a really big thing.

Is there really no way to avoid shared user accounts?

04.12.2013 by Martin Kuppinger

In various discussions over the past month, mainly in the context of Privilege Management, I raised the (somewhat provocative) claim that shared accounts are a bad thing per se and that we must avoid these accounts. The counterargument I got, though, was that sometimes it is just impossible to do so.

There were various examples. One is that users in production environments need a functional account to quickly access PCs and perform some tasks. Another is that such technical user accounts are required when building n-tier applications to, for instance, access databases. Administrators commonly tend to groan when approaches for avoiding the use of shared accounts such as root are considered.

There are many more examples, but when you look at reality, there are plenty of examples and reasons showing how shared accounts (or at least their use) can be avoided. In many healthcare environments, fast user switching has been in use for years. The strict regulations in this sector have frequently led to implementing Enterprise Single Sign-On tools that allow for rapid authentication and access to applications with an individual account. These solutions have frequently replaced previously used shared functional accounts. So why shouldn’t they work in other environments as well?

When looking at n-tier applications, it is worth diving somewhat deeper into end-to-end security. There are many ways to implement end-to-end security. Standards such as OAuth 2.0 make it far easier to implement such concepts. Provisioning tools have supported database systems and other systems for a number of years. Oracle has just “re-invented” database security in its Oracle Database 12c, with tight integration into IAM (Identity and Access Management). Aside from the argument that end-to-end security just does not work (which is wrong), I sometimes hear the argument that it is too complex to do. I don’t think so. It is done differently. It requires a well-thought-out Application Security Infrastructure, something I wrote about years ago. It requires changing the way software architecture and software development are done. But in many, many cases, technical accounts are used primarily for convenience – architects and developers just do not want to consider alternative solutions. And then there is always the “killer argument” of time to market, which is not necessarily valid.
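To make the end-to-end idea concrete, here is a minimal Python sketch of a middle tier that propagates the end user’s identity to the data tier via OAuth 2.0 bearer tokens, instead of using one shared technical account. The functions only build the requests (nothing is sent), and the endpoint URLs and parameter values are illustrative assumptions; the token-exchange grant type is the one standardized for acting on behalf of a user:

```python
def build_token_request(token_url, client_id, client_secret, user_assertion):
    """Exchange the caller's token/assertion for a downstream access
    token (resembling an OAuth 2.0 token-exchange flow), so the data
    tier sees the individual user, not a shared account."""
    return {
        "url": token_url,
        "data": {
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "client_id": client_id,
            "client_secret": client_secret,
            "subject_token": user_assertion,
        },
    }

def build_db_call(api_url, access_token, query):
    """Call the data tier with the per-user token; that tier can now
    authorize and audit the individual user rather than one shared
    technical account."""
    return {
        "url": api_url,
        "headers": {"Authorization": f"Bearer {access_token}"},
        "json": {"query": query},
    }
```

The design point: because every request to the database tier carries the end user’s own token, fine-grained authorization and auditing remain possible all the way down – which is precisely what a shared technical account destroys.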

When I look at administrators, I know of many scenarios where root or Windows Administrator accounts are rarely used, except for firefighting operations. The administrators and operators instead rely on functionally restricted, personal accounts, used alongside their other personal accounts for standard operations such as e-mail access. That works well and does not prevent them from doing a good job in administration and operations. But it requires thinking thoroughly about the concept for these accounts.

So there are many good reasons to get rid of shared accounts, but few, if any, valid ones to continue using them. Given that these accounts are among the biggest security risks, it is worth starting to rethink their use and openly considering alternative solutions. Privilege Management tools only help with the symptoms. It is time to start addressing the cause of this security risk.

Have a look at our KuppingerCole reports. We will publish a new Leadership Compass on Privilege Management soon. Given that shared accounts are a reality and will not disappear quickly, you might need a tool to better secure these. Have a look at the new report, which will help you select the right vendor for your challenges.

Security Advice for Industrial Control Systems

03.12.2013 by Martin Kuppinger

Last week, the German BSI (Bundesamt für Sicherheit in der Informationstechnik, the Federal Office for Information Security) published a document named “ICS-Security-Kompendium”. ICS stands for “Industrial Control Systems”. It is the first comprehensive advisory document the BSI has published on this topic. The BSI puts specific emphasis on two facts:

  • ICS are widely used in critical infrastructures, e.g. utilities, transport, traffic control, etc.
  • ICS are increasingly connected – there is no “air gap” anymore for many of these systems

It is definitely worth having a look at the document, because it provides an in-depth analysis of security risks, best practices for securing such infrastructures, and a methodology for ICS audits. Furthermore, it has a chapter on upcoming trends, such as the impact of the IoT (Internet of Things), the so-called “Industry 4.0”, and Cloud architectures in industrial environments. Industry 4.0 stands for the 4th industrial revolution, in which factories organize themselves – the factory of the future.

As much as I appreciate such a publication, it lacks – from my perspective – a view of two major areas that are tightly connected to ICS security:

  • Aside from the ICS themselves, there is a lot more IT in manufacturing environments that frequently is not within the scope of the corporate IT Security and Information Security departments. Aside from attacks on such systems, for instance in the area of PLM/PDM (Product Lifecycle/Data Management), there are standard PCs that might serve as an entry point for attacks.
  • This directly leads to the second aspect: it is not only about technical security, but about re-thinking the organizational approach to Information Security in all areas within an organization, i.e. a holistic view of all IT and information. Separating ICS and manufacturing IT from the “business IT” does not make sense.

The latter becomes clear when looking at new business cases such as the connected vehicle, smart metering, or simply remote control of HVAC (heating, ventilation, and air conditioning) and other systems in households (or industry). In all these scenarios, there are new business cases that lead to connecting both sides of IT.

Also have a look at our KuppingerCole research on these issues, such as the KuppingerCole report on critical infrastructures in the finance industry (not about ICS) and the KuppingerCole report on managing risks to critical infrastructure.

IAM@IBM: Finally back to leadership

26.11.2013 by Martin Kuppinger

It has been somewhat quiet around IBM’s IAM offering for the past few years. Although IBM was one of the first large vendors to enter that market, other vendors had overtaken it, being more innovative and setting the pace in this still emerging market.

This seems to be over now, and IBM is showing up amongst the IAM leaders again. Since IBM launched its IBM Security division as part of its software business and moved the IAM products from the Tivoli division into that new division, things have changed. The IBM Security division is not only responsible for the IAM products, but also for a number of other offerings, such as the QRadar products.

IBM has defined an IAM strategy that brings together its capabilities in Security Intelligence – such as the IBM X-Force services and the QRadar products – with IAM. The core of IAM is still formed by familiar products (if you replace “Tivoli” with “Security”), such as the IBM Security Access Manager, the IBM Security Directory Integrator, the IBM Security Identity Manager, and others. However, IBM has put a lot of work into these products to improve them and to make them leading-edge (again, in some cases).

There have been four recent announcements. One is the IBM Security Access Manager for Mobile, an appliance that allows managing mobile access and provides SSO services as well as risk- and context-aware access, based on information such as IP reputation – that is where, for instance, IBM X-Force comes into play.

IBM has also introduced their own Privilege Management solution, IBM Security Privileged Identity Manager, to manage shared accounts and add strong authentication. The interesting piece there is the tight integration with QRadar to analyze real-time activity of privileged identity use.

The third major announcement is what IBM calls the IBM Security Directory Server and Integrator. Here they bring together Directory Services and Identity Federation – plus QRadar integration. Integrating federation and directory services allows managing more identities, such as external users, as well as reaching out to Cloud services.

Finally, IBM has extended their IBM Security Identity Manager – the former Tivoli Identity Manager – and added advanced analytical capabilities as well as integration with QRadar security intelligence. The latter allows for better analysis of real-time attacks and fraud detection. While such integration is not entirely new, if you look for instance at NetIQ Identity Manager and Sentinel integration, it highlights the fact that IBM is moving forward with its IAM offerings rather quickly now, showing innovation in various areas and having a clear execution strategy.

I always appreciate strong competitors in a market – it helps drive innovation, which is good for the customers. The IBM investment in IAM is also a good indicator of the relevance of the market segment itself – IAM is one of the key elements of Information Security. IBM’s strategy also aligns well with my view that IAM is just one part of what you need for Information Security. Integration beyond the core IAM capabilities is needed. So, in light of IBM’s current news around IAM, I think it is worth having a closer look at them again.

Secure Information Sharing – a lot of new momentum

22.11.2013 by Martin Kuppinger

During the last few months, we have seen – especially here in Europe – a massive increase in demand for methods to securely share information, beyond the Enterprise. The challenge is not new. I have blogged about this several times, for instance here and here.

While there have been offerings for Information Rights Management or Enterprise Rights Management for many years – from vendors such as Microsoft, Adobe, Documentum or Oracle, plus some smaller players such as Seclore – we are seeing a lot of action on that front these days.

The most important one clearly is the general availability of Microsoft Azure RMS (Rights Management Services), with some new whitepapers available. I have blogged about this offering before, and this clearly is a game changer for the entire market not only of rights management, but the underlying challenge of Secure Information Sharing. Microsoft also has built an ecosystem of partners that provide additional capabilities, including vendors such as Watchful Software or Secude, the latter with a deep SAP integration to protect documents that are exported from SAP. And these are just two in a remarkably long list of partners that help Microsoft in making Azure RMS ready for the heterogeneous IT environments customers have today.

Aside from the Microsoft Azure RMS ecosystem, some other players are pushing solutions into the market that can work rather independently, somewhat more the way Seclore does. Two vendors to mention here are Nextlabs and Covertix. These are interesting options, especially (but not only) when there is a need for rapid, tactical solutions.

Other vendors that are worth a look in this market for Secure Information Sharing include Brainloop and Grau Data. Both are German vendors, but there are other solutions available in other countries and regions. These focus primarily on providing a space to exchange data, while the others mentioned above focus more on data flowing rather freely, by protecting these documents and their use “in motion” and “in use”.

The current momentum – and the current demand – are clear indicators for a fundamental shift we see in Information Security and for Information Stewardship. In fact, all these solutions focus on enabling information sharing and allow users to share information in a secure but controlled way. This is in stark contrast to the common approach within IAM (Identity and Access Management) and IAG (Identity and Access Governance), where the focus is on restricting access.

Secure Information Sharing enables sharing, while the common approaches restrict access to information on particular systems. So it is about enabling versus restricting, but also about an information-centric approach (protect information that is shared) versus a system-centric concept (restrict access to information that resides on particular systems).

With the number of solutions available today, from point solutions to a comprehensive platform with broad support for heterogeneous environments – Microsoft Azure RMS – there are sufficient options for organizations to move forward towards Secure Information Sharing and enabling business users to do their job while keeping Governance, Compliance, and Information Risks in mind. Regardless of the business case, there are solutions available now for Secure Information Sharing.

It is time now for organizations to define a strategy for Secure Information Sharing and to move beyond restricting access. More on this at EIC Munich 2014.

Auditing access to sensitive information in SAP systems

14.11.2013 by Martin Kuppinger

In a recent SAP Insider article, SAP unveiled some interesting news around security auditing and information protection. In SAP NetWeaver Application Server (AS) ABAP 7.40, SAP has included a new functionality called Read Access Logging (RAL). The current version supports Web Dynpro ABAP, web services, and RFC calls. Support for ABAP Dynpro is planned for a later release. SAP has also announced near-term availability for release 7.31 and is planning further “downports” to earlier versions.

What does this feature provide? RAL allows you to log access to defined sensitive data in these systems, and to define which access shall be logged. The configuration of logging is rather flexible. Logs can then be searched and viewed to analyze access to the monitored information.

However, RAL does not support automated analysis of the collected information. The logical next step would be to act on this data automatically, by analyzing it and identifying signs of fraud. Given that SAP already has the technology to do that in place – just think of SAP HANA as a platform for such analytics and SAP Fraud Management as a solution for dealing with fraud – this would give customers a complete solution.
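To illustrate what such automated analysis might look like in its simplest form, here is a hypothetical Python sketch that flags users who read an unusually large number of distinct sensitive records. The record format and the threshold are illustrative assumptions, not SAP RAL’s actual log schema:

```python
def flag_suspicious_readers(log_records, threshold=100):
    """Return users who read more distinct sensitive records than
    `threshold` -- a crude indicator worth investigating for fraud.
    Each record is assumed to be a dict with 'user' and 'record_id'."""
    reads = {}
    for rec in log_records:
        # Track the distinct records each user has accessed.
        reads.setdefault(rec["user"], set()).add(rec["record_id"])
    return sorted(u for u, ids in reads.items() if len(ids) > threshold)
```

Real fraud detection would of course weigh context (time of day, role, peer-group behavior) rather than a simple count – which is exactly where a platform like SAP HANA combined with SAP Fraud Management could add value.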

Despite this gap – it’s not about logging, but about making use of log data – this is an interesting feature for Information Security and SAP Security, and worth evaluating in detail.

Identity Information Quality: Recertify the Identity

14.10.2013 by Martin Kuppinger

One of the challenges many organizations face in their IAM infrastructure is “Identity Information Quality”. That quality, especially in larger organizations, varies depending on the source the information comes from. This challenge is not limited to the enrollment process, but extends to all subsequent processes. While the creation of new digital identities in IAM systems (at least for employees) is frequently driven primarily by imports from HR systems, changes of attribute values might be triggered from many different sources.

Many organizations spend a lot of time and money to improve HR processes to achieve a higher level of Identity Information Quality. That clearly makes sense, especially in the context of HR standardization initiatives. However, even the best processes will not deliver perfect Identity Information Quality.

So the question is: why not use the recertification capabilities of Access Governance tools to improve Identity Information Quality? Why not let departmental managers or the users themselves recertify certain attributes? This would be just another type of recertification campaign. Recertification in Access Governance exists because the Access Management processes are error-prone. If these processes worked perfectly, no one would need recertification. The same is true for digital identities and their attributes, i.e. for Identity Information Quality.

When looking at other types of digital identities, such as those of partners and customers, organizations might need other approaches to improve Identity Information Quality. For partners, self-certification and recertification by the contact persons of the business partners might work. However, there is no need for that where Identity Federation is used – in that case, it is the responsibility of the business partner’s organization to ensure Identity Information Quality.

In the case of consumers, self-certification – the option to review “account information” – might be one approach. Depending on the sales model, key account managers might also recertify their accounts. Furthermore, there is an increasing number of openly available information sources, such as Facebook, that under specific circumstances allow access via Graph APIs. These can be used to verify identity information.

But back to the employees: to me, it appears just logical to recertify the identity and not only the access information.

Mobile Security: Virtualization on the smartphone

10.10.2013 by Martin Kuppinger

LG recently announced a new platform called GATE that will enable some LG business smartphones to run two mobile operating systems in parallel. LG appears, with this feature, to be reacting to the security concerns many organizations have around BYOD (Bring Your Own Device). Virtualization is one of the smartest options for enhancing the security of mobile devices, as we discussed in the KuppingerCole Advisory Note “BYOD”.

By virtualizing the smartphones and providing two segregated environments, users can access both their business and their private environment, with the business apps operating in a segregated and more secure way in concert with the business backend systems.

I personally like that approach, because it focuses on making the smartphone smart enough for BYOD. Together with additional features such as built-in and improved MDM (Mobile Device Management) support and VPN integration, LG is raising the bar for enterprise-ready smartphones.

However, there is one question LG has left open as of now: which types of strong authentication are supported for access to the smartphone, particularly the business virtual machine? Clearly, segregation makes a lot of sense. But without adequate strong authentication, there is still a security gap.

Overall, it is good to see smartphone vendors making significant progress in security. The bad thing is that they should have started this security evolution years ago. But better late than never.

Azure Active Directory: What is the Graph API?

30.09.2013 by Martin Kuppinger

In Azure Active Directory (AAD) there is a Graph API. This is the main API to access AAD. The idea of a Graph API is not entirely new. The one provided by Facebook is already well established. But what is this really about and why does AAD provide such an API?

First of all, I neither like the term “Graph API” nor “API” itself very much. Both are, from my perspective, far too technical. They are fine for people with a good background in mathematics and computer science, but not for typical business people. A graph is a mathematical concept describing nodes and their connections. The structure of AAD can be understood as a graph. To navigate this graph, there is an API (Application Programming Interface) – the Graph API.

So the AAD Graph API is the interface for navigating the content of AAD (walking the tree, or, more correctly, the graph) and accessing (and creating and manipulating) the information stored therein. Developers can perform CRUD (Create, Read, Update, Delete) operations through REST (Representational State Transfer) API endpoints when developing applications such as web applications and mobile apps – as well as more conventional business processes.

It comes as no surprise then that the Graph API is REST-based. REST is the de facto standard for new types of APIs. It is rather simple to use, especially when compared with traditional methods for directory access such as the LDAP C API (yes, it always depends on what you compare something with…).

The Graph API of Azure AD provides a broad set of standard queries that can be used to retrieve metadata information about the tenant’s directory and its data structure, but also about users, groups, and other common entities. Apart from these standard queries, there are so-called differential queries that allow developers to request only the changes that have happened on the result set of the query since the previous query run. This is very interesting for applications that need to synchronize AAD and other data stores.
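A differential query can be sketched as follows: the first call returns the full result set plus a delta token, and later calls pass that token back to receive only the changes since the previous run. The parameter names and URL format below follow the Azure AD Graph API conventions of the time, but treat them as assumptions to verify against Microsoft’s documentation:

```python
from urllib.parse import urlencode

def users_query_url(tenant, api_version, delta_token=None):
    """Build the URL for a (differential) query against the users
    collection of a tenant's directory."""
    params = {"api-version": api_version}
    if delta_token:
        # Only changes since this token are returned, which keeps
        # repeated synchronization runs cheap.
        params["deltaLink"] = delta_token
    return f"https://graph.windows.net/{tenant}/users?{urlencode(params)}"

first = users_query_url("contoso.onmicrosoft.com", "2013-04-05")
later = users_query_url("contoso.onmicrosoft.com", "2013-04-05", "AAAB")
```

A synchronization job would persist the token returned by each run and feed it into the next, so only deltas ever cross the wire.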

Access to the Graph API is done in two steps. The first is authentication (based on tenant ID, client ID, and credentials), which is performed against the Windows Azure AD authentication service. The authentication service returns a JWT token, which can then be used for running Graph API queries. The Graph API relies on an RBAC (Role Based Access Control) model: it authorizes every request and returns the result set if authorization succeeds.
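This two-step flow can be sketched in Python; the functions only build the requests (nothing is sent), and the endpoint URLs, grant type, and query parameters reflect common Azure AD conventions of the time – assumptions to check against Microsoft’s documentation rather than a definitive implementation:

```python
def token_request(tenant_id, client_id, client_secret):
    """Step 1: authenticate against the AAD authentication service.
    The response body would contain a JWT access token."""
    return {
        "url": f"https://login.windows.net/{tenant_id}/oauth2/token",
        "data": {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "resource": "https://graph.windows.net",
        },
    }

def graph_request(tenant_id, jwt_token):
    """Step 2: run a Graph API query, presenting the token. The Graph
    API authorizes the request against its RBAC model before returning
    the result set."""
    return {
        "url": f"https://graph.windows.net/{tenant_id}/users"
               "?api-version=2013-04-05",
        "headers": {"Authorization": f"Bearer {jwt_token}"},
    }
```

In a real application, the dictionaries built here would be passed to an HTTP client, and the token from step 1 would be cached and refreshed before expiry rather than requested per call.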

Overall, the Graph API is a simple yet powerful concept for accessing content of the AAD. It is the successor to traditional approaches for directory access such as LDAP with its rather complex structure (which is simplified by ADSI, ADO .NET, etc.). Being based on REST, it is a familiar approach for web developers. There is a lot of information already available at the MSDN (Microsoft Developer Network) website.

From the perspective of a non-developer, the most important thing to understand is that it is far easier than ever before to build applications that rely on the AD – or, more particularly, on the AAD. All the information about the employees, business partners, and customers that organizations may hold in the AAD in the future is accessible through the Graph API for new types of applications, from integration of that information into business processes to simple mobile apps providing, for instance, customer information out of the AAD. This is done in a secure way, based on the built-in security concepts of AAD such as the RBAC model. The Graph API is one of the things that moves AAD from a purpose-built Directory Service (such as the on-premise AD) to a platform that allows you to flexibly connect your enterprise – the users, the things, the applications.

© 2014 Martin Kuppinger, KuppingerCole