22.11.2013 by Martin Kuppinger
During the last few months, we have seen – especially here in Europe – a massive increase in demand for methods to securely share information, beyond the Enterprise. The challenge is not new. I have blogged about this several times, for instance here and here.
While there have been offerings for Information Rights Management or Enterprise Rights Management for many years – from vendors such as Microsoft, Adobe, Documentum or Oracle, plus some smaller players such as Seclore – we are seeing a lot of action on that front these days.
The most important development clearly is the general availability of Microsoft Azure RMS (Rights Management Services), with some new whitepapers available. I have blogged about this offering before, and it clearly is a game changer not only for the rights management market, but for the underlying challenge of Secure Information Sharing. Microsoft has also built an ecosystem of partners that provide additional capabilities, including vendors such as Watchful Software or Secude, the latter with a deep SAP integration to protect documents that are exported from SAP. And these are just two in a remarkably long list of partners that help Microsoft make Azure RMS ready for the heterogeneous IT environments customers have today.
Aside from the Microsoft Azure RMS ecosystem, some other players are pushing solutions into the market that can work rather independently, somewhat more in the way Seclore does. Two vendors to mention here are Nextlabs and Covertix. These are interesting options, especially (but not only) when there is a need for rapid, tactical solutions.
Other vendors worth a look in this market for Secure Information Sharing include Brainloop and Grau Data. Both are German vendors, but there are other solutions available in other countries and regions. These focus primarily on providing a space to exchange data, while the vendors mentioned above focus more on data flowing freely, protecting documents and their use “in motion” and “in use”.
The current momentum – and the current demand – are clear indicators of a fundamental shift in Information Security and Information Stewardship. In fact, all these solutions enable users to share information in a secure but controlled way. This is in stark contrast to the common approach within IAM (Identity and Access Management) and IAG (Identity and Access Governance), where the focus is on restricting access.
Secure Information Sharing enables sharing, while the common approaches restrict access to information on particular systems. So it is about enabling versus restricting, but also about an information-centric approach (protect information that is shared) versus a system-centric concept (restrict access to information that resides on particular systems).
With the number of solutions available today – from point solutions to a comprehensive platform with broad support for heterogeneous environments, Microsoft Azure RMS – there are sufficient options for organizations to move towards Secure Information Sharing and to enable business users to do their jobs while keeping Governance, Compliance, and Information Risks in mind. Regardless of the business case, there are solutions available now for Secure Information Sharing.
It is time now for organizations to define a strategy for Secure Information Sharing and to move beyond restricting access. More on this at EIC Munich 2014.
14.11.2013 by Martin Kuppinger
In a recent SAP Insider article, SAP unveiled some interesting news around security auditing and information protection. In SAP NetWeaver Application Server (AS) ABAP 7.40 they included a new functionality called Read Access Logging (RAL). The current version supports Web Dynpro ABAP, web service, and RFC calls. Support for ABAP Dynpro is planned for a later release. SAP has also announced near-term availability for release 7.31 and is planning further “downports” to earlier versions.
What does this feature provide? RAL allows you to log access to sensitive data in these systems and to define exactly which access should be logged. The configuration of logging is rather flexible. Logs can then be searched and viewed to analyze access to the monitored information.
However, RAL does not support automated analysis of the collected information. The logical next step would be to act on this data automatically, by analyzing it and identifying signs of fraud. Given that SAP already has the technology to do so – think of SAP HANA as a platform for such analytics and SAP Fraud Management as a solution for dealing with fraud – this would give customers a complete solution at hand.
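Since RAL stops at collecting logs, the automated analysis suggested above would sit on top of them. A minimal sketch of that idea in Python – the log fields, values, and threshold are purely illustrative assumptions, not the actual RAL log format:

```python
from collections import Counter

# Illustrative read-access log entries (field names are assumptions,
# not the real RAL log schema).
log = [
    {"user": "jdoe", "field": "IBAN", "channel": "RFC"},
    {"user": "jdoe", "field": "IBAN", "channel": "RFC"},
    {"user": "jdoe", "field": "IBAN", "channel": "RFC"},
    {"user": "asmith", "field": "SALARY", "channel": "WebDynpro"},
]

def flag_heavy_readers(entries, threshold=2):
    """Flag users whose number of logged sensitive reads exceeds a threshold -
    one very simple indicator that might point to fraud."""
    counts = Counter(e["user"] for e in entries)
    return [user for user, n in counts.items() if n > threshold]

print(flag_heavy_readers(log))  # -> ['jdoe']
```

A real fraud-detection solution would of course apply far richer analytics; the point is only that the raw logs become useful once something consumes them.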
Despite this gap – it’s not about logging, but about making use of log data – this is an interesting feature for Information Security and SAP Security, and worth evaluating in detail.
Posted in SAP
14.10.2013 by Martin Kuppinger
One of the challenges many organizations are facing in their IAM infrastructure is “Identity Information Quality”. That quality, especially in larger organizations, varies depending on the source it comes from. This challenge is not limited to the enrollment process but extends to all subsequent processes. While the creation of new digital identities in IAM systems (at least for employees) is frequently driven primarily through imports from HR systems, changes of attribute values might be triggered from many different sources.
Many organizations spend a lot of time and money to improve HR processes to achieve a higher level of Identity Information Quality. That clearly makes sense, especially in the context of HR standardization initiatives. However, even the best processes will not deliver perfect Identity Information Quality.
So the question is: Why not use the recertification capabilities of Access Governance tools to improve Identity Information Quality? Why not let the departmental manager or the users themselves recertify certain attributes? This would be just another type of recertification campaign. Recertification in Access Governance exists because the Access Management processes are error-prone. If these processes worked perfectly, no one would need recertification. The same is true for digital identities and their attributes, i.e. for Identity Information Quality.
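Such an attribute recertification campaign could be modeled much like an access recertification campaign. A toy sketch in Python – all names, fields, and statuses are illustrative assumptions, not any Access Governance product’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class AttributeCertItem:
    """One item in an attribute recertification campaign: a single
    identity attribute awaiting review by a manager or the user."""
    identity: str
    attribute: str
    value: str
    status: str = "pending"   # pending / confirmed / corrected

def recertify(item: AttributeCertItem, reviewer_value: str) -> AttributeCertItem:
    """The reviewer either confirms the stored value or supplies a correction,
    which directly improves Identity Information Quality."""
    if reviewer_value == item.value:
        item.status = "confirmed"
    else:
        item.value = reviewer_value
        item.status = "corrected"
    return item

item = AttributeCertItem("jdoe", "costCenter", "CC-4711")
recertify(item, "CC-4712")
print(item.status, item.value)  # -> corrected CC-4712
```

The mechanics are identical to recertifying entitlements; only the subject of the review changes from access rights to attribute values.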
When looking at other types of digital identities, such as those of partners and customers, organizations might need other approaches to improve Identity Information Quality. For partners, self-certification and recertification by the business partners’ contact persons might work. However, there is no need for that where Identity Federation is used – in that case, it is the responsibility of the business partner’s organization to enforce Identity Information Quality.
In the case of consumers, self-certification – the option to review “account information” – might be one approach. Depending on the sales model, key account managers might also recertify their accounts. Furthermore, there is an increasing number of openly available information sources, such as Facebook, that under specific circumstances allow access via Graph APIs. These can be used to verify identity information.
But back to the employees: to me, it appears just logical to recertify the identity and not only the access information.
10.10.2013 by Martin Kuppinger
LG recently announced a new platform called GATE that will enable some LG business smartphones to run two mobile operating systems in parallel. LG appears, with this feature, to be reacting to the security concerns many organizations have around BYOD (Bring Your Own Device). Virtualization is one of the smartest options for enhancing the security of mobile devices, as we discussed in the KuppingerCole Advisory Note “BYOD”.
By virtualizing the smartphones and providing two segregated environments, users can access both their business and their private environment, with the business apps operating in a segregated and more secure way in concert with the business backend systems.
I personally like that approach, because it focuses on making the smartphone smart enough for BYOD. Together with additional features such as built-in and improved MDM (Mobile Device Management) support and VPN integration, LG is raising the bar for enterprise-ready smartphones.
However, there is one question LG has left open as of now: which types of strong authentication are supported for access to the smartphone, particularly the business virtual machine? Clearly, segregation makes a lot of sense. But without adequate strong authentication, there is still a security gap.
Overall, it is good to see smartphone vendors making significant progress in security. The bad thing is that they should have started this security evolution years ago. But better late than never.
30.09.2013 by Martin Kuppinger
In Azure Active Directory (AAD) there is a Graph API. This is the main API to access AAD. The idea of a Graph API is not entirely new. The one provided by Facebook is already well established. But what is this really about and why does AAD provide such an API?
First of all, I neither like the term “Graph API” nor “API” itself very much. Both are, from my perspective, far too technical. They are fine for people with a good background in mathematics and computer science, but not for typical business people. A graph is a mathematical concept describing nodes and their connections. The structure of AAD can be understood as a graph. To navigate this graph, there is an API (Application Programming Interface) – the Graph API.
So the AAD Graph API is the interface for navigating the content of AAD (walking the tree – or, more correctly, the graph) and for accessing, creating, and manipulating the information stored therein. Developers can perform CRUD (Create, Read, Update, Delete) operations through REST (Representational State Transfer) API endpoints when developing applications such as web applications and mobile apps, as well as integrations into more conventional business processes.
It comes as no surprise then that the Graph API is REST-based. REST is the de facto standard for new types of APIs. It is rather simple to use, especially when compared with traditional methods for directory access such as the LDAP C API (yes, it always depends on what you compare something with…).
The Graph API of Azure AD provides a broad set of standard queries that can be used to retrieve metadata information about the tenant’s directory and its data structure, but also about users, groups, and other common entities. Apart from these standard queries, there are so-called differential queries that allow developers to request only the changes that have happened on the result set of the query since the previous query run. This is very interesting for applications that need to synchronize AAD and other data stores.
Access to the Graph API is done in two steps. The first one is the authentication (based on tenant-ID, client-ID and credentials), which is done against the Windows AAD authentication service. The authentication service returns a JWT Token. This token then can be used for running Graph API queries. The Graph API relies on an RBAC (Role Based Access Control) model. It authorizes every request and returns the result set if the authorization has been successful.
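The two steps can be sketched as follows. This minimal Python sketch only constructs the requests rather than sending them; the tenant name is a placeholder, and the endpoint shapes, parameter names, and API version reflect the OAuth2 client-credentials flow as documented for AAD at the time – check the current MSDN documentation before relying on them:

```python
from urllib.parse import urlencode

TENANT = "contoso.onmicrosoft.com"   # assumption: your tenant domain or ID

def token_request(client_id: str, client_secret: str) -> tuple[str, str]:
    """Step 1: build the request against the AAD authentication service.
    POSTing this body returns a JWT access token on success."""
    url = f"https://login.windows.net/{TENANT}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://graph.windows.net",
    })
    return url, body

def graph_query(jwt_token: str, entity: str = "users") -> tuple[str, dict]:
    """Step 2: build a Graph API GET; the JWT from step 1 travels in the
    Authorization header as a Bearer token, and the Graph API authorizes
    the request against its RBAC model before returning results."""
    url = f"https://graph.windows.net/{TENANT}/{entity}?api-version=2013-04-05"
    headers = {"Authorization": f"Bearer {jwt_token}"}
    return url, headers

url, headers = graph_query("<jwt-from-step-1>")
print(url)
```

The same pattern applies to differential queries: the query URL simply carries an additional parameter referring to the previous run, so only changes are returned.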
Overall, the Graph API is a simple yet powerful concept for accessing content of the AAD. It is the successor to traditional approaches for directory access such as LDAP with its rather complex structure (which is simplified by ADSI, ADO .NET, etc.). Being based on REST, it is a familiar approach for web developers. There is a lot of information already available at the MSDN (Microsoft Developer Network) website.
From the perspective of a non-developer, the most important thing to understand is that it is far easier than ever before to build applications that rely on the AD – or, more particularly, on the AAD. All the information about employees, business partners, and customers that organizations may hold in the AAD in the future is accessible through the Graph API for new types of applications, from the integration of that information into business processes to simple mobile apps providing, for instance, customer information out of the AAD. This is done in a secure way, based on the built-in security concepts of AAD such as the RBAC model. The Graph API is one of the things that moves AAD from a purpose-built Directory Service (such as the on-premise AD) to a platform that allows you to flexibly connect your enterprise – the users, the things, the applications.
23.09.2013 by Martin Kuppinger
Some time ago Microsoft unveiled its Azure Active Directory (AAD). During recent weeks, I have had several discussions about what AAD is. First of all: it is not just an on-premise AD ported to Azure and run as a Cloud service. Although its core relies on proven AD technology, it differs greatly from on-premise AD. It is a new concept, going well beyond a classical directory service and integrating support for Identity Federation and Cloud Access/Authorization Management.
In fact you can use three flavors of AD today:
- The classical on-premise AD
- The on-premise AD running on Azure in virtual servers
- The AAD
The second variant is rather unknown but might be interesting in some use cases, when organizations want to optimize their existing on-premise AD infrastructure.
However, the most interesting element in this family of ADs clearly is AAD. AAD is a new type of thing that is “more than a directory service”. It can integrate with existing AD infrastructures, ideally by using Identity Federation based on ADFS (Active Directory Federation Services) and SAML v2 as a protocol.
It is also a service that runs as a real Cloud service – and this aspect is proven: AAD was in production long before it became publicly available, as the directory service used by both Microsoft Office 365 and Microsoft Intune.
When looking at existing AD infrastructures, there are some common challenges that AAD can address, aside from running as a Cloud service (for that alone, you could also use the on-premise AD running on Azure).
One of the common challenges with AD is schema changes. Schema changes are a nightmare for many administrators: they can’t be rolled back, and they might affect AD performance. Thus, many administrators are extremely reluctant to make any schema changes. AAD solves this with its flexible, extensible data model, which has left behind the LDAP/X.500 heritage that is still visible in AD. Thus it comes as no surprise that the primary means of access to AAD is not LDAP (which is not even supported out-of-the-box yet) but the REST-based Graph API.
The second common challenge AD administrators are facing (amongst some others…) is the management of external users. Many organizations have implemented some approach to managing such externals, for instance in separate domains or at least subtrees. However, these approaches are not easy to define and implement from both a security and a scalability perspective. It might work rather well for some business partners and sub-contractors that need access for a longer period of time. However, onboarding thousands of employees of new business partners (for instance for sales to new target groups or in new regions) is something the on-premise AD is not ideally suited for.
AAD is built for that. It not only scales flexibly – think millions of customers instead of a few thousand or tens of thousands of employees or business partners – but it also supports federation by design. It can federate both inward and outward. In other words (and as mentioned above): it is not only a directory service but a federation platform.
And even more: it is also a tool you can use to manage access to other Cloud services. It can act as an authorization service for these when federating with them. Based on policies, access to such services can be managed and restricted.
So there is a lot more in AAD than in the on-premise AD. AAD is the logical extension of AD for the “Extended Enterprise” or “Connected Enterprise” – whichever term you choose. It allows managing external users far more simply while being massively scalable. It allows managing access to Cloud services. And it still behaves well in conjunction with existing on-premise AD environments.
There are alternatives to AAD in the market. However, AAD is one of the Identity Cloud services worth having a deeper look at. The most important thing you should do when looking at AAD is to accept and understand that this is far more than the on-premise AD ported to the Cloud.
I will cover some aspects of AAD and the surrounding (and growing) ecosystem in upcoming blog posts.
11.08.2013 by Martin Kuppinger
Information Rights Management is the discipline within Information Security and IAM (Identity and Access Management) that allows protecting information right at the source: The single file. Files are encrypted and permissions for using the files are directly applied to the encrypted and packaged file.
This allows protection of documents across their entire lifecycle: at rest, in motion, and in use. Other Information Security technologies might only protect files at rest. Classical file server security can enforce access rights; however, once users have access, they can do whatever they want with the file. Other technologies protect the file transfer. But all of them fail to secure information across the entire lifecycle. That is where Information Rights Management comes into play.
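To make the concept concrete, here is a deliberately simplified toy model of such a protected package in Python: the payload travels together with its usage policy, sealed so that tampering with either is detectable, and every operation is checked against the embedded permissions. Real IRM products additionally encrypt the payload and use proper key management; both are omitted in this sketch:

```python
import base64
import hashlib
import hmac

SECRET = b"demo-key"   # toy key; real IRM relies on proper key management

def protect(content: bytes, permissions: set[str]) -> dict:
    """Bundle content and policy into one package, sealed with an HMAC
    so that tampering with either part is detectable."""
    policy = ",".join(sorted(permissions)).encode()
    tag = hmac.new(SECRET, content + b"|" + policy, hashlib.sha256).hexdigest()
    return {"payload": base64.b64encode(content).decode(),
            "policy": sorted(permissions), "tag": tag}

def allowed(package: dict, operation: str) -> bool:
    """Check an operation ('view', 'print', ...) against the embedded
    policy, but only after verifying the seal."""
    content = base64.b64decode(package["payload"])
    policy = ",".join(package["policy"]).encode()
    tag = hmac.new(SECRET, content + b"|" + policy, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, package["tag"]) and operation in package["policy"]

pkg = protect(b"quarterly numbers", {"view"})
print(allowed(pkg, "view"), allowed(pkg, "print"))  # -> True False
```

Because the policy is bound to the file itself, the protection travels with the document – the essence of the information-centric approach described above.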
Information Rights Management – more important than ever before
Information Rights Management (IRM) is more important than ever before. An increasing number of attacks against both on-premise and Cloud IT infrastructures, and the uncertainty and concerns regarding the access of governmental agencies to data sent over the Internet and held in the Cloud, are driving the need for better Information Security approaches that protect information throughout its lifecycle. In addition, there is an ever-growing number of regulations regarding Privacy, the protection of Intellectual Property, etc.
Information Rights Management is the logical solution for these challenges, as far as documents are concerned, because – as mentioned above – it protects information at rest, in motion, and in use. Protection “in use” depends on the type of application: it requires either applications with built-in support for Information Rights Management or workarounds that at least inhibit certain operations such as printing.
Clearly, Information Rights Management also has its limits. Someone photographing the screen can still bypass it. However, using Information Rights Management on a large scale would mean a big step forward for Information Security.
IRM: Not new – so why haven’t we already seen a breakthrough?
Given that IRM is such a logical approach for improving Information Security, the obvious question is: Why don’t we already use it? There are several offerings from various vendors, but we are far from widespread adoption.
There are many reasons for that. The most important ones, so far, have been a lack of broad support for various file formats and applications, issues in dealing with external users that need to consume information, and the complexity of implementation. There have been other challenges, but these three are the most relevant ones.
Microsoft to remove the IRM inhibitors
Microsoft, one of the vendors that has been active in the IRM market for years now, is tackling these inhibitors. Microsoft RMS (Microsoft Rights Management Services) has been re-designed and enhanced. The Microsoft promise is that “Microsoft RMS enables the flow of protected data on all important devices, of all important file types, and lets these files be used by all important people in a user’s collaboration circle”. Another important capability is what Microsoft calls BYOK – Bring Your Own Key. Companies can manage their own keys in their own on-premise HSM (Hardware Security Module), and the HSM can be asked to perform operations using that key. This is a complex topic I will cover in more depth in another post. There is also a broad range of implementation models, from doing everything in the cloud to more “cloud hesitant” approaches, serving the needs and addressing the concerns of various types of customers.
The Microsoft Rights Management suite is implemented as a Windows Azure service. By moving IRM to the Cloud, Microsoft enables flexible collaboration between various parties, beyond the traditional perimeter of the enterprise. Companies can flexibly collaborate with their business partners.
Moving RMS to the Cloud might raise security concerns. However, the documents themselves are never seen by the Azure RMS service. Azure RMS is responsible for secure key exchange between the involved client devices. It is responsible for requesting authentication and authorization information. This is done by relying on either the federated on-premise AD or Windows Azure AD. Other Identity Providers will be added over time, including Microsoft Account (aka LiveID) and Google IDs. Furthermore, Windows Azure AD provides flexibility for federating with external parties.
This flexibility is also the answer to the challenge of supporting all users within a collaboration circle. Windows Azure RMS does not rely solely on the on-premise Active Directory (and ADFS-based federation), but is far more flexible in onboarding and managing RMS users. Users from external partners can sign up themselves once they receive an RMS-protected document.
The second challenge always has been the management of file types and applications. Microsoft RMS supports “RMS-enlightened applications” (i.e. ones that have built-in support for RMS), a free RMS App that runs on various operating system platforms and supports various standard formats such as JPG, TXT, and XML, and finally a wrapping approach to protect file types that are not supported by the other two approaches. Furthermore, Microsoft has started building a significant ecosystem with various partners supporting environments such as CAD systems or documents exported from SAP environments. Based on these changes, RMS works well on a broad range of devices and for all relevant file types, including native support for the PDF format in the Microsoft-provided PDF reader.
With Azure RMS and all the new features in Microsoft RMS, the setup and management of RMS become far easier than ever before – including policy management and usability for end users.
Thus, Microsoft provides answers to all three challenges mentioned at the beginning of this note: Dealing with all types of users; dealing with all types of file formats and applications; and reducing the complexity of IRM and specifically their own RMS.
There are some good sources for further information available, and they are worth a look. From my perspective, it is well worth spending time evaluating the new Microsoft RMS and Windows Azure RMS. I see a strong opportunity for IRM to break through as a technology with mass adoption.
This is only my first post on this subject, further posts will follow.
09.08.2013 by Martin Kuppinger
Some days ago I had a briefing with BMC Software on their new MyIT offering. MyIT is a self-service approach that enables end users to request services. It focuses on the user experience and tries to close the gap between the IT-centric view of services and the view business users have.
This aligns well with two areas of KuppingerCole research:
- One is the Future IT Paradigm by KuppingerCole, our definition of how we expect and recommend that IT organizations change in order to be able to deal with the changes in IT itself – the change from on-premise IT to hybrid models and an increasing portion of Cloud Computing and the overall influence of the Computing Troika (Cloud, Mobile, Social Computing).
- The other is what we call Assignment Management, i.e. an approach that allows managing not only access rights (as in Access Governance) but all types of assignments, including physical devices and flexible service requests for users.
We have various publications out on both of these topics. The Future IT Paradigm is covered extensively in the KuppingerCole report “Scenario: Understanding IT Service and Security Management”. The model described therein, now called the “Future IT Paradigm”, splits IT into three levels, with business service management on top, the management of services, information and security in the middle, and IT service production at the bottom. BMC MyIT fits well to the upper layer in this model. Another document worth reading when looking at that model is the KuppingerCole report “Scenario: The Future of IT Organizations”. That document describes how IT organizations have to adapt to the fundamental changes I have listed above.
The second area, Assignment Management, is covered in a whitepaper I wrote a while ago in which I outline the basic concept. It also is one of the investment areas for CIOs we have identified in our CIO GPS.
As mentioned, BMC MyIT aligns well with these concepts, by moving IT closer to the user. Instead of relying on IT-centric Service Catalogs and, in general, an IT-centric view, it is about translating this view into a user-centric perspective and making it accessible for everyday use by all users in a simple way.
BMC seems to be pushing this strategic approach. Yesterday they announced BMC AppZone, based on technology acquired through the purchase of Partnerpedia. That allows companies to implement corporate app stores that in turn can be integrated with BMC MyIT. Such a move – allowing business users to shop for apps when looking for IT services – is just so logical that you wonder why it has taken so long to become reality.
It will be interesting to see how BMC further executes on their strategy around AppZone and MyIT. I see massive potential both for upselling to their existing customers (leveraging existing investments) and for approaching new customers. The biggest challenge as of now is that, while the solution is designed to be ITSM-agnostic, the standard integration of MyIT is currently limited to existing BMC backend infrastructure. Providing more standard integrations and simple, flexible interfaces for integration will be the critical success factor. If BMC solves that, this might enable them to be not only an IT Service Management leader, but also a Business Service Management leader.
08.08.2013 by Martin Kuppinger
I’m aware that this is a somewhat tangential post, as there is no relationship to our KuppingerCole topic of Mobile Security, but clearly it fits into the theme of the Computing Troika, i.e. the changes in Business and IT due to the impact of Cloud, Mobile, and Social Computing. However, the main purpose is to share some of my experiences with the Microsoft Surface RT I’ve been using for quite a while now.
I just upgraded to the Windows 8.1 Preview, which is a significant step forward for a simple reason: It includes Microsoft Outlook and I do not need to rely on either the Outlook Web App or the standard Mail app anymore. I will come back to this later.
What I do not like that much with my Surface is the angular design, in contrast to the smoother curves of an Apple iPad. However, my iPad has made it to the living room and is the device of choice for my wife now. I switched to the Surface due to the fact that the iPad was far too limited for my requirements in daily business use, especially when travelling. I need a strong tool for email, calendar, and task management. And I need the ability to not only read Word or PowerPoint documents in the proper formats but also to edit them, including the support for comments and the “track changes” mode of Word. I can do all that with my notebook, but when travelling this is a rather large device to lug around. Tablets are good when I know that I mainly will work on emails plus some other light work, or when I might want to use the Amazon Kindle app. Reading books on my notebook (one of these new convertibles) works, but the device is too heavyweight to really be convenient. So, in other words: I have been looking for a tablet that still provides my work environment. The “big” Surface Pro was not my choice due to its weight and height. I opted for the RT version.
This worked quite well for me. I like some of the features, such as having a classical Windows experience if I like and need to have it, or the USB port. This is so convenient. I remember back at EIC I had to quickly provide a presentation to the technicians. I just put in a USB stick and copied it from Microsoft SkyDrive. I can’t do that with an iPad. Dave Kearns sat beside me and just said: “Why doesn’t Microsoft promote that feature?” The other purpose I use the USB port for is attaching a mouse when I have to work more intensively. It just works.
I also like the fact that the platform distinguishes between the device and the user. This is an important security feature, especially for enterprise deployments – just think about machine and user certificates. I do not strictly need it, but it makes sense – and it even makes sense in the living room sometimes, if for instance the children should have limited access.
I feel comfortable with the screen resolution etc. The battery also lasts sufficiently long. Thus, there is – from my perspective – little need to switch to the Pro version of the Surface.
However, there are also some challenges. I do not have 3G support in the device. I solved this by using a mobile WiFi/3G router. This is very convenient, because it works for all of my mobile devices. I also miss one (but only one) app for Windows RT, which is the “Bahn” app provided by the German railway. I have it on my Windows Phone, but not yet for RT.
And then there was the Mail app. This clearly is not the best piece of software Microsoft ever created – it is closer to being one of the worst. After an update it simply failed, because my folder structures on Office 365 are too complex for the app. But that has changed now: after upgrading to the Windows 8.1 Preview, I have Outlook 2013. So I have a tablet with (close to) full Office 2013 capabilities, which makes it a great tool for business travel and vacation.
Having been asked by several people about my experience with Microsoft Surface now (being one of the still rare users, obviously), I decided to share that experience in my blog.
05.08.2013 by Martin Kuppinger
When looking at the recent security news, there is one predominant theme: the NSA surveillance disclosed by Edward Snowden. There is some other news, but little “breaking news”. We might count the news about the SIM card flaw; however, this seems to be less severe in reality than first reported.
I will not comment much on the NSA issue. Both Dave Kearns and I have touched on this topic, here and here. There are a lot of political discussions going on, with some accusing others of not telling the (whole) truth about what they knew. Interestingly, here in Germany the opposition is accusing the current government, even though they themselves were in government some years ago and thus well aware of what has been going on, at least since 2001. Clearly, this is not a topic for election campaigns, and at least until now it does not seem to be working out as such for the current opposition.
In addition, the reaction of Apple, Google, Microsoft and others did not surprise me. They are asking the US government to let them unveil more information about when they were urged to provide information to the NSA. That fits with what I have said from the very beginning: the entire affair is a business challenge, especially for US Cloud providers. Thus, they will create (some) political pressure. On the other hand, as long as there are no real alternatives to US-based Cloud services, not much will change. Maybe the shift from on-premise to the Cloud will slow down. However, over time the commotion will fizzle out.
Facebook usage in schools
Another news item that did not gain much attention comes from Baden-Württemberg, the southwestern part of Germany where I live. The government of Baden-Württemberg has forbidden the use of Facebook for communication between teachers and their students. In some schools, Facebook had been used to communicate about homework and results. However, this communication might include privacy-relevant content. In addition, making Facebook a mandatory communication tool would force students into this social network. Thus, according to the order of the government of Baden-Württemberg (and in accordance with German privacy regulations), it is not allowed. As I’ve mentioned, there has been little public discussion about this – either the use has been rather limited or the decision has been widely accepted.
Teaching computer science in schools?
When talking about schools, there has been another news item. The German BITMi (Bundesverband IT-Mittelstand e.V.), the association of medium-sized IT businesses, demands that computer science become a required subject in German schools, starting rather early. Currently, it is optional in many schools and regions, and taught as a separate subject only in a few grades, mainly the higher ones. However, it is an integral part of several courses in virtually all schools. Recently, Hamburg decided to reduce the time spent on computer science.
There is some discussion about whether students really need to learn coding – which is part of computer science as a separate subject, while the integrated teaching focuses more on core competencies in using computers, the Internet, word processors, spreadsheets, etc. I think this can be debated. However, I’d like to see some thorough education on IT security in schools, so that students understand this critical subject far better than they typically do today.