04.12.2013 by Martin Kuppinger
In various discussions over the past month, mainly in the context of Privilege Management, I raised the (somewhat provocative) claim that shared accounts are a bad thing per se and that we must avoid these accounts. The counterargument I got, though, was that sometimes it is just impossible to do so.
There were various examples. One is that users in production environments need a functional account to quickly access PCs and perform some tasks. Another is that such technical user accounts are required when building n-tier applications, for instance to access databases. Administrators tend to groan when approaches for avoiding the use of shared accounts such as root are considered.
There are many more examples, but when you look at reality, there are plenty of examples and reasons showing how shared accounts (or at least their use) can be avoided. In many healthcare environments, fast user switching has been in use for years now. The strict regulations in this sector have frequently led to implementing Enterprise Single Sign-On tools that allow for rapid authentication and access to applications with an individual account. These solutions have frequently replaced previously used shared functional accounts. So why shouldn’t they work in other environments as well?
When looking at n-tier applications, it is worth diving somewhat deeper into end-to-end security. There are many ways to implement end-to-end security, and standards such as OAuth 2.0 make it far easier to implement such concepts. Provisioning tools have supported database systems and other systems for a number of years. Oracle has just “re-invented” database security in its Oracle Database 12c, with tight integration into IAM (Identity and Access Management). Aside from the argument that end-to-end security just does not work (which is wrong), I sometimes hear the argument that this is too complex to do. I don’t think so. It is not more complex – it is different. It requires a well-thought-out Application Security Infrastructure, something I wrote about years ago. It requires changing the way software architecture and software development are done. But in many, many cases technical accounts are used primarily for convenience – architects and developers just do not want to consider alternative solutions. And then there is always the “killer argument” of time to market, which is not necessarily valid.
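To illustrate the point with a minimal sketch (the schema and names here are invented, not any specific product’s API): instead of hiding every request behind one shared technical database account, the middle tier can propagate the authenticated end-user identity with each operation, so that every action stays attributable to an individual.

```python
import sqlite3

def open_connection():
    # In-memory database standing in for the backend of an n-tier app.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
    conn.execute("CREATE TABLE audit_log (actor TEXT, action TEXT)")
    return conn

def place_order(conn, end_user, item):
    # The acting identity travels with the request instead of being
    # hidden behind a shared 'app_service' account, so database-side
    # auditing can attribute each action to a person.
    conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
    conn.execute("INSERT INTO audit_log (actor, action) VALUES (?, ?)",
                 (end_user, f"placed order for {item}"))

conn = open_connection()
place_order(conn, "alice@example.com", "widget")
actor, action = conn.execute("SELECT actor, action FROM audit_log").fetchone()
print(actor)  # alice@example.com
```

Real end-to-end security goes further – token propagation, database session contexts, per-user authorization – but even this simple pattern shows that the shared account buys convenience at the cost of attribution.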
When I look at administrators, I know of many scenarios where root or Windows Administrator accounts are rarely used, except for firefighting operations. The administrators and operators instead rely on functionally restricted personal accounts, which they use alongside the personal accounts they use for standard operations such as email access. That works well, and it does not prevent them from doing a good job in administration and operations. But it requires thinking thoroughly about the concept for these accounts.
So there are many good reasons to get rid of shared accounts, but few, if any, valid ones to continue using them. Given that these accounts are amongst the biggest security risks, it is worth starting to rethink their use and openly considering alternative solutions. Privilege Management tools only help with the symptoms. It is time to start addressing the cause of this security risk.
Have a look at our KuppingerCole reports. We will publish a new Leadership Compass on Privilege Management soon. Given that shared accounts are a reality and will not disappear quickly, you might need a tool to better secure these. Have a look at the new report, which will help you select the right vendor for your challenges.
03.12.2013 by Martin Kuppinger
Last week, the German BSI (Bundesamt für Sicherheit in der Informationstechnik, the Federal Office for IT Security) published a document named “ICS-Security-Kompendium”. ICS stands for “Industrial Control Systems”. It is the first comprehensive advisory document the BSI has published on this topic. The BSI puts specific emphasis on two facts:
- ICS are widely used in critical infrastructures, e.g. utilities, transport, traffic control, etc.
- ICS are increasingly connected – there is no “air gap” anymore for many of these systems
It is definitely worth having a look at the document, because it provides an in-depth analysis of security risks, best practices for securing such infrastructures, and a methodology for ICS audits. Furthermore, it has a chapter on upcoming trends such as the impact of the IoT (Internet of Things), the so-called “Industry 4.0”, and Cloud architectures in industrial environments. Industry 4.0 stands for the fourth industrial revolution, in which factories organize themselves – the factory of the future.
As much as I appreciate such a publication, it lacks – from my perspective – a view of two major areas that are tightly connected to ICS security:
- Aside from the ICS themselves, there is a lot more IT in manufacturing environments that frequently is not in scope for the corporate IT Security and Information Security departments. Aside from attacks on such systems, for instance in the area of PLM/PDM (Product Lifecycle/Product Data Management), there are standard PCs that might serve as entry points for attacks.
- This leads directly to the second aspect: it is not only about technical security, but about re-thinking the organizational approach to Information Security in all areas of an organization, i.e. a holistic view of all IT and information. Separating ICS and manufacturing IT from the “business IT” does not make sense.
The latter becomes clear when looking at new business cases such as the connected vehicle, smart metering, or simply remote control of HVAC (heating, ventilation, and air conditioning) and other systems in households (or industry). In all these scenarios, there are new business cases that lead to connecting both sides of IT.
Also have a look at our KuppingerCole research on these issues, such as the KuppingerCole report on critical infrastructures in the finance industry (not about ICS) and the KuppingerCole report on managing risks to critical infrastructure.
26.11.2013 by Martin Kuppinger
It has been somewhat quiet around IBM’s IAM offering for the past few years. Although IBM was one of the first large vendors to enter that market, other vendors had overtaken it, being more innovative and setting the pace in this still emerging market.
This seems to be over now, and IBM is showing up amongst the IAM leaders again. Since IBM launched its IBM Security division as part of its software business and moved the IAM products from the Tivoli division into that new division, things have changed. The IBM Security division is responsible not only for the IAM products but also for a number of other offerings, such as the QRadar products.
IBM has defined an IAM strategy that brings together its capabilities in Security Intelligence – such as the IBM X-Force services and the QRadar products – with IAM. The core of IAM is still formed by familiar products (if you replace “Tivoli” with “Security”), such as the IBM Security Access Manager, the IBM Security Directory Integrator, the IBM Security Identity Manager, and others. However, IBM has put a lot of work into these products to improve them and make them leading-edge (again, in some cases).
There have been four recent announcements. One is the IBM Security Access Manager for Mobile, an appliance that allows managing mobile access and provides SSO services and risk- and context-aware access based on information such as IP reputation – that is where, for instance, IBM X-Force comes into play.
IBM has also introduced its own Privilege Management solution, IBM Security Privileged Identity Manager, to manage shared accounts and add strong authentication. The interesting piece there is the tight integration with QRadar to analyze privileged identity use in real time.
The third major announcement is what IBM calls the IBM Security Directory Server and Integrator. Here they bring together Directory Services and Identity Federation – plus QRadar integration. Integrating federation and directory services allows managing more identities, such as external users, as well as reaching out to Cloud services.
Finally, IBM has extended its IBM Security Identity Manager – the former Tivoli Identity Manager – and added advanced analytical capabilities as well as integration with QRadar security intelligence. The latter allows for better analysis of real-time attacks and fraud detection. While such integration is not entirely new – look, for instance, at the NetIQ Identity Manager and Sentinel integration – it highlights the fact that IBM is moving forward with its IAM offerings rather quickly now, showing innovation in various areas and having a clear execution strategy.
I always appreciate strong competitors in a market – it helps drive innovation, which is good for customers. The IBM investment in IAM is also a good indicator of the relevance of the market segment itself – IAM is one of the key elements of Information Security. IBM’s strategy also aligns well with my view that IAM is just one part of what you need for Information Security. Integration beyond the core IAM capabilities is needed. So, in light of IBM’s current news around IAM, I think it is worth having a closer look at them again.
22.11.2013 by Martin Kuppinger
During the last few months, we have seen – especially here in Europe – a massive increase in demand for methods to securely share information beyond the enterprise. The challenge is not new; I have blogged about this several times, for instance here and here.
While there have been offerings for Information Rights Management or Enterprise Rights Management for many years – from vendors such as Microsoft, Adobe, Documentum or Oracle, plus some smaller players such as Seclore – we are seeing a lot of action on that front these days.
The most important one clearly is the general availability of Microsoft Azure RMS (Rights Management Services), with some new whitepapers available. I have blogged about this offering before; it clearly is a game changer for the entire market, not only for rights management but for the underlying challenge of Secure Information Sharing. Microsoft has also built an ecosystem of partners that provide additional capabilities, including vendors such as Watchful Software or Secude, the latter with deep SAP integration to protect documents that are exported from SAP. And these are just two in a remarkably long list of partners that help Microsoft make Azure RMS ready for the heterogeneous IT environments customers have today.
Aside from the Microsoft Azure RMS ecosystem, some other players are pushing solutions into the market that can work rather independently, somewhat more in the way Seclore does. Two vendors to mention here are NextLabs and Covertix. These are interesting options, especially (but not only) when there is a need for rapid, tactical solutions.
Other vendors worth a look in this market for Secure Information Sharing include Brainloop and Grau Data. Both are German vendors, but there are other solutions available in other countries and regions. These two focus primarily on providing a space to exchange data, while the others mentioned above focus more on data flowing rather freely, protecting documents and their use “in motion” and “in use”.
The current momentum – and the current demand – are clear indicators of a fundamental shift we see in Information Security and Information Stewardship. In fact, all these solutions focus on enabling information sharing, allowing users to share information in a secure but controlled way. This is in stark contrast to the common approach within IAM (Identity and Access Management) and IAG (Identity and Access Governance), where the focus is on restricting access.
Secure Information Sharing enables sharing, while the common approaches restrict access to information on particular systems. So it is about enabling versus restricting, but also about an information-centric approach (protect information that is shared) versus a system-centric concept (restrict access to information that resides on particular systems).
With the number of solutions available today – from point solutions to a comprehensive platform with broad support for heterogeneous environments, Microsoft Azure RMS – there are sufficient options for organizations to move towards Secure Information Sharing, enabling business users to do their job while keeping Governance, Compliance, and Information Risks in mind. Regardless of the business case, there are solutions available now for Secure Information Sharing.
It is time now for organizations to define a strategy for Secure Information Sharing and to move beyond restricting access. More on this at EIC Munich 2014.
14.11.2013 by Martin Kuppinger
In a recent SAP Insider article, SAP unveiled some interesting news around security auditing and information protection. In SAP NetWeaver Application Server (AS) ABAP 7.40, SAP included a new functionality called Read Access Logging (RAL). The current version supports Web Dynpro ABAP, web services, and RFC calls. Support for ABAP Dynpro is planned for a later release. SAP has also announced near-term availability for release 7.31 and is planning further “downports” to earlier versions.
What does this feature provide? RAL allows you to log access to defined sensitive data in these systems, as well as to define which access shall be logged. The configuration of logging is rather flexible. Logs then can be searched and viewed to analyze access to the information that is monitored.
However, RAL does not support automated analysis of the collected information. The logical next step would be to act on this data automatically, by analyzing it and identifying signs of fraud. Given that SAP already has the technology in place to do that – just think of SAP HANA as a platform for such analytics and SAP Fraud Management as a solution for dealing with fraud – this would help customers to really have a complete solution at hand.
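As a rough illustration of what such automated analysis could look like (the log record format and the threshold below are invented for this sketch, not SAP’s actual RAL schema), even a simple volume check over read-access log entries can surface users whose access to sensitive fields is unusually heavy:

```python
from collections import Counter

# Simulated read-access log entries; a real pipeline would pull these
# from the RAL store instead of hard-coding them.
log_entries = [
    {"user": "U1", "field": "IBAN"},
    {"user": "U1", "field": "IBAN"},
    {"user": "U2", "field": "IBAN"},
] + [{"user": "U3", "field": "IBAN"}] * 50

def flag_heavy_readers(entries, threshold=10):
    # Count read accesses per user and flag anyone above the threshold -
    # a crude but effective first signal for fraud investigation.
    counts = Counter(e["user"] for e in entries)
    return sorted(u for u, n in counts.items() if n > threshold)

print(flag_heavy_readers(log_entries))  # ['U3']
```

Real fraud detection would of course weigh context (role, time of day, peer-group baselines) rather than raw counts, which is exactly where an analytics platform comes in.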
Despite this gap – it’s not about logging, but about making use of log data – this is an interesting feature for Information Security and SAP Security, and worth evaluating in detail.
Posted in SAP
14.10.2013 by Martin Kuppinger
One of the challenges many organizations face in their IAM infrastructure is “Identity Information Quality”. That quality, especially in larger organizations, varies depending on the source the information comes from. This challenge is not limited to the enrollment process but extends to all subsequent processes. While the creation of new digital identities in IAM systems (at least for employees) is frequently driven primarily by imports from HR systems, changes of attribute values might be triggered from many different sources.
Many organizations spend a lot of time and money to improve HR processes to achieve a higher level of Identity Information Quality. That clearly makes sense, especially in the context of HR standardization initiatives. However, even the best processes will not deliver perfect Identity Information Quality.
So the question is: Why not use the recertification capabilities of Access Governance tools to improve Identity Information Quality? Why not let the departmental manager, or users themselves, recertify certain attributes? This would be just another type of recertification campaign. Recertification in Access Governance exists because the Access Management processes are error-prone. If these processes worked perfectly, no one would need recertification. The same is true for digital identities and their attributes, i.e. for Identity Information Quality.
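A minimal sketch of the idea (the attribute names and data model are invented for illustration): an attribute recertification campaign simply generates one review task per identity, asking the responsible manager to confirm selected attribute values.

```python
# Toy identity records; a real campaign would read these from the
# Access Governance tool's identity repository.
identities = [
    {"id": "emp1", "manager": "mgr1", "department": "Sales", "costCenter": "CC-100"},
    {"id": "emp2", "manager": "mgr2", "department": "IT", "costCenter": "CC-200"},
]

# The attributes selected for recertification - chosen per campaign.
ATTRIBUTES_TO_RECERTIFY = ["department", "costCenter"]

def build_campaign(identities):
    # One review task per identity: the manager confirms (or corrects)
    # the current values of the selected attributes.
    tasks = []
    for ident in identities:
        tasks.append({
            "reviewer": ident["manager"],
            "subject": ident["id"],
            "attributes": {a: ident[a] for a in ATTRIBUTES_TO_RECERTIFY},
        })
    return tasks

tasks = build_campaign(identities)
print(tasks[0]["reviewer"])  # mgr1
```

Structurally this is the same campaign mechanism Access Governance tools already run for entitlements – only the objects under review change from access rights to identity attributes.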
When looking at other types of digital identities, such as those of partners and customers, organizations might need other approaches to improve Identity Information Quality. For partners, self-certification or recertification by the business partners’ contact persons might work. However, there is no need for that where Identity Federation is used – in that case, it is the responsibility of the business partner’s organization to enforce Identity Information Quality.
In the case of consumers, self-certification – the option to review “account information” – might be one approach. Depending on the sales model, key account managers might also recertify their accounts. Furthermore, there is an increasing number of openly available information sources, such as Facebook, that under specific circumstances allow access via Graph APIs. These can be used to verify identity information.
But back to the employees: to me, it appears just logical to recertify the identity and not only the access information.
10.10.2013 by Martin Kuppinger
LG recently announced a new platform called GATE that will enable some LG business smartphones to run two mobile operating systems in parallel. LG appears, with this feature, to be reacting to the security concerns many organizations have around BYOD (Bring Your Own Device). Virtualization is one of the smartest options for enhancing the security of mobile devices, as we discussed in the KuppingerCole Advisory Note “BYOD”.
By virtualizing the smartphones and providing two segregated environments, users can access both their business and their private environment, with the business apps operating in a segregated and more secure way in concert with the business backend systems.
I personally like that approach, because it focuses on making the smartphone smart enough for BYOD. Together with additional features such as built-in and improved MDM (Mobile Device Management) support and VPN integration, LG is raising the bar for enterprise-ready smartphones.
However, there is one question LG has left open as of now: which types of strong authentication are supported for access to the smartphone, particularly the business virtual machine? Clearly, segregation makes a lot of sense. But without adequate strong authentication, there is still a security gap.
Overall, it is good to see smartphone vendors making significant progress in security. The downside is that they should have started this security evolution years ago. But better late than never.
30.09.2013 by Martin Kuppinger
In Azure Active Directory (AAD) there is a Graph API. This is the main API to access AAD. The idea of a Graph API is not entirely new. The one provided by Facebook is already well established. But what is this really about and why does AAD provide such an API?
First of all, I like neither the term “Graph API” nor the term “API” itself very much. Both are, from my perspective, far too technical. They are fine for people with a good background in mathematics and computer science, but not for typical business people. A graph is a mathematical concept describing nodes and their connections. The structure of AAD can be understood as a graph. To navigate this graph, there is an API (Application Programming Interface) – the Graph API.
So the AAD Graph API is the interface for navigating the content of AAD (walking the tree, or, more correctly, the graph) and accessing (and creating and manipulating) the information stored therein. Developers can perform CRUD (Create, Read, Update, Delete) operations through REST (Representational State Transfer) API endpoints when developing applications such as web applications and mobile apps – as well as more conventional business processes.
It comes as no surprise then that the Graph API is REST-based. REST is the de facto standard for new types of APIs. It is rather simple to use, especially when compared with traditional methods for directory access such as the LDAP C API (yes, it always depends on what you compare something with…).
The Graph API of Azure AD provides a broad set of standard queries that can be used to retrieve metadata information about the tenant’s directory and its data structure, but also about users, groups, and other common entities. Apart from these standard queries, there are so-called differential queries that allow developers to request only the changes that have happened on the result set of the query since the previous query run. This is very interesting for applications that need to synchronize AAD and other data stores.
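The mechanics of differential queries can be illustrated with a small local simulation (this is not the AAD API itself, just the concept): the first query returns the full result set plus a sync token; subsequent queries with that token return only what has changed since then.

```python
class DirectoryStore:
    # Toy stand-in for a directory that tracks a change version,
    # so clients can ask "what changed since version N?".
    def __init__(self):
        self.version = 0
        self.changes = []  # list of (version, record)

    def upsert(self, record):
        self.version += 1
        self.changes.append((self.version, record))

    def differential_query(self, since=0):
        # Return only records changed after 'since', plus a new sync
        # token the client stores for its next differential query.
        new = [r for v, r in self.changes if v > since]
        return new, self.version

store = DirectoryStore()
store.upsert({"user": "alice"})
full, token = store.differential_query()        # initial full sync
store.upsert({"user": "bob"})
delta, token = store.differential_query(token)  # only the change
print([r["user"] for r in delta])  # ['bob']
```

This is exactly why differential queries matter for synchronization: a client keeping AAD and another data store in sync never has to re-read the full directory, only the delta.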
Access to the Graph API is done in two steps. The first is authentication (based on tenant ID, client ID, and credentials), which is done against the Windows Azure AD authentication service. The authentication service returns a JWT (JSON Web Token). This token can then be used for running Graph API queries. The Graph API relies on an RBAC (Role Based Access Control) model: it authorizes every request and returns the result set if the authorization has been successful.
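A sketch of this two-step pattern (endpoint URLs and parameter names are illustrative of the 2013-era service and may differ in detail): step 1 requests a JWT from the authentication service, step 2 presents that JWT as a bearer token with every Graph API call. The sketch only constructs the requests; no network calls are made.

```python
TENANT = "contoso.onmicrosoft.com"

def token_request(tenant, client_id, client_secret):
    # Step 1: authenticate against the AAD authentication service
    # (illustrative endpoint and field names).
    return {
        "url": f"https://login.windows.net/{tenant}/oauth2/token",
        "data": {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "resource": "https://graph.windows.net",
        },
    }

def graph_query(tenant, jwt, entity="users"):
    # Step 2: call the Graph API with the JWT as a bearer token.
    # In practice an api-version query parameter is also required.
    return {
        "url": f"https://graph.windows.net/{tenant}/{entity}",
        "headers": {"Authorization": f"Bearer {jwt}"},
    }

req = graph_query(TENANT, "<token-from-step-1>")
print(req["url"])
```

The key design point is that authentication and data access are cleanly separated: the directory never sees raw credentials on a query, only a token it can validate and authorize per request.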
Overall, the Graph API is a simple yet powerful concept for accessing the content of AAD. It is the successor to traditional approaches for directory access such as LDAP, with its rather complex structure (which is simplified by ADSI, ADO.NET, etc.). Being based on REST, it is a familiar approach for web developers. There is a lot of information already available on the MSDN (Microsoft Developer Network) website.
From the perspective of a non-developer, the most important thing to understand is that it is far easier than ever before to build applications that rely on the AD – or, more particularly, on the AAD. All the information about the employees, business partners, and customers that organizations may hold in the AAD in future is accessible through the Graph API for new types of applications, from integration of that information into business processes to simple mobile apps providing, for instance, customer information out of the AAD. This is done in a secure way, based on the built-in security concepts of AAD such as the RBAC model. Graph API is one of the things that moves AAD from a purpose-built Directory Service (such as the on-premise AD) to a platform that allows you to flexibly connect your enterprise – the users, the things, the applications.
23.09.2013 by Martin Kuppinger
Some time ago, Microsoft unveiled its Azure Active Directory (AAD). During recent weeks, I have had several discussions about what AAD is. First of all: it is not just an on-premise AD ported to Azure and run as a Cloud service. Despite relying at its core on proven AD technology, it differs greatly from on-premise AD. It is a new concept, going well beyond a classical directory service and integrating support for Identity Federation and Cloud Access/Authorization Management.
In fact, you can use three flavors of AD today:
- The classical on-premise AD
- The on-premise AD running on Azure in virtual servers
- The AAD
The second variant is rather unknown but might be interesting in some use cases, when organizations want to optimize their existing on-premise AD infrastructure.
However, the most interesting element in this family of ADs clearly is AAD. AAD is a new type of thing that is “more than a directory service”. It can integrate with existing AD infrastructures, ideally by using Identity Federation based on ADFS (Active Directory Federation Services) with SAML v2 as the protocol.
It also runs as a real Cloud service. The latter aspect is proven, and looking at it provides some additional insight into AAD: AAD was in use long before it became publicly available, as the directory service behind both Microsoft Office 365 and Microsoft Intune.
When looking at existing AD infrastructures, there are some common challenges that AAD can address, aside from running as a Cloud service (there you also could use the on-premise AD running on Azure).
One of the common challenges with AD is schema changes. Schema changes are a nightmare for many administrators: they can’t be rolled back, and they might affect AD performance. Thus, many administrators are extremely reluctant to make any schema changes. AAD solves this with its flexible, extensible data model. This data model has left behind the LDAP/X.500 heritage that is still visible in AD. Thus it comes as no surprise that the primary means of access to AAD is not LDAP (which is not even supported out-of-the-box yet) but the REST-based Graph API.
The second common challenge AD administrators are facing (amongst some others…) is the management of external users. Many organizations have implemented some approach to managing such externals, for instance in separate domains or at least subtrees. However, these approaches are not easy to define and implement from both a security and a scalability perspective. They might work rather well for some business partners and sub-contractors that need access for a longer period of time. However, onboarding thousands of employees of new business partners (for instance, for sales to new target groups or in new regions) is something the on-premise AD is not ideally suited for.
AAD is built for that. Not only can it scale flexibly – think millions of customers instead of a few thousand or tens of thousands of employees or business partners – but it also supports federation by design. It can federate both inward and outward. In other words (and as mentioned above): it is not only a directory service but also a federation platform.
And there is even more: it is also a tool you can use to manage access to other Cloud services. It can act as an authorization service for these when federating with them. Based on policies, access to such services can be managed and restricted.
So there is a lot more in AAD than in the on-premise AD. AAD is the logical extension of AD for the “Extended Enterprise” or “Connected Enterprise” – whichever term you choose. It allows managing external users far more simply while being massively scalable. It allows managing access to Cloud services. And it still works well in conjunction with existing on-premise AD environments.
There are alternatives to AAD in the market. However, AAD is one of the Identity Cloud services worth having a deeper look at. The most important thing you should do when looking at AAD is to accept and understand that this is far more than the on-premise AD ported to the Cloud.
I will cover some aspects of AAD and the surrounding (and growing) ecosystem in upcoming blog posts.
11.08.2013 by Martin Kuppinger
Information Rights Management is the discipline within Information Security and IAM (Identity and Access Management) that allows protecting information right at the source: the single file. Files are encrypted, and permissions for using the files are applied directly to the encrypted and packaged file.
This allows protection of documents across their entire lifecycle: at rest, in motion, and in use. Other Information Security technologies might only protect files at rest. Classical file server security can enforce access rights, but once a user has access, he can do whatever he wants with the file. Other technologies protect the file transfer. But all of them fail to secure information across the entire lifecycle. That is where Information Rights Management comes into play.
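Conceptually, an IRM-protected file is a package of encrypted payload plus a usage policy that travels with it. The following toy sketch illustrates only that packaging idea – the XOR step is a stand-in, not real cryptography, and real IRM products additionally involve a license server that releases keys per authenticated user.

```python
import json, base64

def protect(content: bytes, key: bytes, policy: dict) -> bytes:
    # "Encrypt" the payload (XOR placeholder - NOT real cryptography)
    # and bundle it with the usage policy into one self-contained package.
    cipher = bytes(b ^ key[i % len(key)] for i, b in enumerate(content))
    package = {
        "policy": policy,  # e.g. who may view the document
        "payload": base64.b64encode(cipher).decode(),
    }
    return json.dumps(package).encode()

def open_protected(package: bytes, key: bytes, user: str) -> bytes:
    # The policy is enforced at open time, wherever the file has
    # travelled - this is what "protection in use" means conceptually.
    pkg = json.loads(package)
    if user not in pkg["policy"]["viewers"]:
        raise PermissionError(f"{user} may not view this document")
    cipher = base64.b64decode(pkg["payload"])
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(cipher))

key = b"demo-key"
pkg = protect(b"confidential", key, {"viewers": ["alice"]})
print(open_protected(pkg, key, "alice"))  # b'confidential'
```

Because the policy is bound to the file itself rather than to the server it sits on, the protection does not end at the file server boundary – which is precisely the gap classical access control leaves open.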
Information Rights Management – more important than ever before
Information Rights Management (IRM) is more important than ever before. An increasing number of attacks against both on-premise and Cloud IT infrastructures, and the uncertainty and concerns regarding governmental agencies’ access to data sent over the Internet and held in the Cloud, are driving the need for better Information Security approaches that protect information throughout its lifecycle. In addition, there is an ever-growing number of regulations regarding Privacy, the protection of Intellectual Property, etc.
Information Rights Management is the logical solution to these challenges, as far as documents are concerned, because – as mentioned above – it protects information at rest, in motion, and in use. How well this works depends on the type of application: it requires applications with built-in support for Information Rights Management, or workarounds that at least inhibit certain operations such as printing.
Clearly, Information Rights Management also has its limits. A person photographing the screen can still bypass security. However, using Information Rights Management on a large scale would be a big step forward for Information Security.
IRM: Not new – so why haven’t we already seen a breakthrough?
Given that IRM is such a logical approach for improving Information Security, the obvious question is: Why don’t we already use it? There are several offerings from various vendors, but we are far from widespread adoption.
There are many reasons for that. The most important ones, so far, have been a lack of broad support for various file formats and applications, issues in dealing with external users that need to consume information, and the complexity of implementation. There have been other challenges, but these three are the most relevant ones.
Microsoft to remove the IRM inhibitors
Microsoft, one of the vendors that has been active in the IRM market for years now, is now tackling these inhibitors. Microsoft RMS (Microsoft Rights Management Services) has been re-designed and enhanced. The Microsoft promise is that “Microsoft RMS enables the flow of protected data on all important devices, of all important file types, and lets these files be used by all important people in a user’s collaboration circle”. Another important capability is what Microsoft calls BYOK – Bring Your Own Key. Companies can manage their own keys in their own on-premise HSM (Hardware Security Module), and the HSM can be asked to perform operations using that key. This is a complex topic I will cover in more depth in another post. There is also a broad range of implementation models, from doing everything in the cloud to more “cloud hesitant” approaches, serving the needs and addressing the concerns of various types of customers.
The Microsoft Rights Management suite is implemented as a Windows Azure service. By moving IRM to the Cloud, Microsoft enables flexible collaboration between various parties, beyond the traditional perimeter of the enterprise. Companies can flexibly collaborate with their business partners.
Moving RMS to the Cloud might raise security concerns. However, the documents themselves are never seen by the Azure RMS service. Azure RMS is responsible for secure key exchange between the involved client devices and for requesting authentication and authorization information. This is done by relying on either the federated on-premise AD or Windows Azure AD. Other Identity Providers will be added over time, including Microsoft Account (aka LiveID) and Google IDs. Furthermore, Windows Azure AD provides flexibility for federating with external parties.
This flexibility is also the answer to the challenge of supporting all users within a collaboration circle. Windows Azure RMS does not rely solely on the on-premise Active Directory (and ADFS-based federation) but is far more flexible in onboarding and managing RMS users. Users from external partners can sign themselves up once they receive an RMS-protected document.
Another long-standing challenge has been the management of file types and applications. Microsoft RMS supports “RMS-enlightened” applications (i.e. ones with built-in support for RMS), a free RMS App that runs on various operating system platforms and supports various standard formats such as JPG, TXT, and XML, and finally a wrapping approach to protect file types that are not supported by the other two approaches. Furthermore, Microsoft has started building a significant ecosystem with various partners supporting environments such as CAD systems or documents exported from SAP environments. Based on these changes, RMS works well on a broad range of devices and for all relevant file types, including native support for the PDF format in the Microsoft-provided PDF reader.
With Azure RMS and all the new features in Microsoft RMS, setup and management of RMS become far easier than ever before – including policy management and usability for end users.
Thus, Microsoft provides answers to all three challenges mentioned at the beginning of this post: dealing with all types of users; dealing with all types of file formats and applications; and reducing the complexity of IRM, and specifically of its own RMS.
There are some good sources for further information – have a look at these. From my perspective, it is well worth spending time evaluating the new Microsoft RMS and Windows Azure RMS. I see a strong opportunity for a breakthrough of IRM as a technology with mass adoption.
This is only my first post on this subject; further posts will follow.