14.10.2013 by Martin Kuppinger
One of the challenges many organizations are facing in their IAM infrastructure is “Identity Information Quality”. That quality, especially in larger organizations, varies depending on the source it comes from. This challenge is not limited to the enrollment process, but extends to all subsequent processes. While the creation of new digital identities in IAM systems (at least for employees) is frequently driven primarily by imports from HR systems, changes of attribute values might be triggered from many different sources.
Many organizations spend a lot of time and money to improve HR processes to achieve a higher level of Identity Information Quality. That clearly makes sense, especially in the context of HR standardization initiatives. However, even the best processes will not deliver perfect Identity Information Quality.
So the question is: why not use the recertification capabilities of Access Governance tools to improve Identity Information Quality? Why not let the departmental manager or the users themselves recertify certain attributes? This would be just another type of recertification campaign. Recertification in Access Governance exists because Access Management processes are error-prone; if these processes worked perfectly, no one would need recertification. The same is true for digital identities and their attributes, i.e. for Identity Information Quality.
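To make the idea concrete, here is a minimal sketch of how attribute recertification could be modeled as just another campaign type alongside access recertification. All names, classes, and attribute choices are illustrative and not taken from any particular Access Governance product.

```python
# Sketch: an attribute-recertification campaign, where a reviewer (manager
# or the user themselves) confirms or rejects the stored value of an
# identity attribute. Purely illustrative data model.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RecertificationItem:
    user: str
    attribute: str              # e.g. "department", "cost_center"
    current_value: str
    reviewer: str               # departmental manager, or the user themselves
    confirmed: Optional[bool] = None   # None = decision still pending

@dataclass
class Campaign:
    name: str
    items: list = field(default_factory=list)

    def decide(self, user: str, attribute: str, confirmed: bool) -> None:
        """Record the reviewer's decision for one attribute of one user."""
        for item in self.items:
            if item.user == user and item.attribute == attribute:
                item.confirmed = confirmed

    def open_items(self) -> list:
        """Items still awaiting a reviewer decision."""
        return [i for i in self.items if i.confirmed is None]

campaign = Campaign("Q4 identity attribute review")
campaign.items.append(RecertificationItem("alice", "department", "Sales", "bob"))
campaign.items.append(RecertificationItem("alice", "cost_center", "4711", "bob"))
campaign.decide("alice", "department", confirmed=True)
print(len(campaign.open_items()))  # attributes still awaiting review
```

The point of the sketch: the data structure is the same as for access recertification, only the reviewed object is an attribute value rather than an entitlement.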
When looking at other types of digital identities, such as those of partners and customers, organizations might need other approaches to improve Identity Information Quality. For partners, self-certification and recertification by the business partners’ contact persons might work. However, there is no need for that where Identity Federation is used – in that case, it is the responsibility of the business partner’s organization to enforce Identity Information Quality.
In the case of consumers, the option of self-certification – the option to review “account information” – might be one approach. Depending on the sales model, key account managers also might recertify their accounts. Furthermore, there is an increasing number of openly available information sources such as Facebook that under specific circumstances allow access via Graph APIs. These can be used to verify identity information.
But back to the employees: to me, it appears just logical to recertify the identity and not only the access information.
10.10.2013 by Martin Kuppinger
LG recently announced a new platform called GATE that will enable some LG business smartphones to run two mobile operating systems in parallel. LG appears, with this feature, to be reacting to the security concerns many organizations have around BYOD (Bring Your Own Device). Virtualization is one of the smartest options for enhancing the security of mobile devices, as we discussed in the KuppingerCole Advisory Note “BYOD”.
By virtualizing the smartphones and providing two segregated environments, users can access both their business and their private environment, with the business apps operating in a segregated and more secure way in concert with the business backend systems.
I personally like that approach, because it focuses on making the smartphone smart enough for BYOD. Together with additional features such as built-in and improved MDM (Mobile Device Management) support and VPN integration, LG is raising the bar for enterprise ready smartphones.
However, there is one question LG has left open as of now: which types of strong authentication are supported for access to the smartphone, particularly the business virtual machine? Clearly, segregation makes a lot of sense. But without adequate strong authentication, there is still a security gap.
Overall, it is good to see smartphone vendors making significant progress in security. The downside is that they should have started this security evolution years ago. But better late than never.
30.09.2013 by Martin Kuppinger
In Azure Active Directory (AAD) there is a Graph API. This is the main API to access AAD. The idea of a Graph API is not entirely new. The one provided by Facebook is already well established. But what is this really about and why does AAD provide such an API?
First of all, I neither like the term “Graph API” nor “API” itself very much. Both are, from my perspective, far too technical. They are fine for people with a good background in mathematics and computer science, but not for typical business people. A graph is a mathematical concept describing nodes and their connections. The structure of AAD can be understood as a graph. To navigate this graph, there is an API (Application Programming Interface) – the Graph API.
So the AAD Graph API is the interface for navigating the content of AAD (walking the tree, or, more correctly, the graph) and accessing (and creating and manipulating) the information stored therein. Developers can perform CRUD (Create, Read, Update, Delete) operations through REST (Representational State Transfer) API endpoints when developing applications such as web applications and mobile apps – as well as more conventional business processes.
It comes as no surprise then that the Graph API is REST-based. REST is the de facto standard for new types of APIs. It is rather simple to use, especially when compared with traditional methods for directory access such as the LDAP C API (yes, it always depends on what you compare something with…).
The Graph API of Azure AD provides a broad set of standard queries that can be used to retrieve metadata information about the tenant’s directory and its data structure, but also about users, groups, and other common entities. Apart from these standard queries, there are so-called differential queries that allow developers to request only the changes that have happened on the result set of the query since the previous query run. This is very interesting for applications that need to synchronize AAD and other data stores.
Access to the Graph API is done in two steps. The first is authentication (based on tenant ID, client ID, and credentials), which is done against the Windows AAD authentication service. The authentication service returns a JWT (JSON Web Token). This token can then be used for running Graph API queries. The Graph API relies on an RBAC (Role Based Access Control) model. It authorizes every request and returns the result set if authorization has been successful.
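The two-step flow can be sketched as follows. This is only an illustration of the shape of the requests; the tenant name, client ID, and secret are placeholders, and the endpoint URLs follow the pattern documented for the Azure AD Graph API rather than being copied from any specific deployment.

```python
# Sketch of the two-step Graph API access flow: (1) obtain a JWT from the
# AAD authentication service, (2) query the Graph API with that token as a
# bearer token. Requests are only constructed here, not sent.
import urllib.parse
import urllib.request

TENANT = "contoso.onmicrosoft.com"   # hypothetical tenant

def build_token_request(client_id: str, client_secret: str) -> urllib.request.Request:
    """Step 1: request a token from the AAD authentication service."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://graph.windows.net",
    }).encode()
    return urllib.request.Request(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/token",
        data=body, method="POST")

def build_graph_query(token: str, entity: str = "users") -> urllib.request.Request:
    """Step 2: run a Graph API query, presenting the JWT as a bearer token."""
    url = f"https://graph.windows.net/{TENANT}/{entity}?api-version=1.6"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# Sending would be done with urllib.request.urlopen(...); here we only
# inspect the constructed request.
req = build_graph_query("eyJ...sample-jwt")
print(req.full_url)
```

If the token request succeeds, the response body contains the JWT, which is then attached to every Graph query; the Graph API authorizes each request individually, as described above.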
Overall, the Graph API is a simple yet powerful concept for accessing content of the AAD. It is the successor to traditional approaches for directory access such as LDAP with its rather complex structure (which is simplified by ADSI, ADO .NET, etc.). Being based on REST, it is a familiar approach for web developers. There is a lot of information already available at the MSDN (Microsoft Developer Network) website.
From the perspective of a non-developer, the most important thing to understand is that it is far easier than ever before to build applications that rely on the AD – or, more particularly, on the AAD. All the information about employees, business partners, and customers that organizations may hold in the AAD in the future is accessible through the Graph API for new types of applications, from the integration of that information into business processes to simple mobile apps providing, for instance, customer information out of the AAD. This is done in a secure way, based on the built-in security concepts of AAD such as the RBAC model. The Graph API is one of the things that move AAD from a purpose-built Directory Service (such as the on-premise AD) to a platform that allows you to flexibly connect your enterprise – the users, the things, the applications.
23.09.2013 by Martin Kuppinger
Some time ago Microsoft unveiled its Azure Active Directory (AAD). During recent weeks, I have had several discussions about what AAD is. First of all: It is not just an on-premise AD ported to Azure and run as a Cloud service. Despite relying in its inner areas on proven AD technology, it differs greatly from on-premise AD. It is a new concept, going well beyond a classical directory service and integrating support for Identity Federation and Cloud Access/Authorization Management.
In fact you can use three flavors of AD today:
- The classical on-premise AD
- The on-premise AD running on Azure in virtual servers
- The AAD
The second variant is rather unknown but might be interesting in some use cases, when organizations want to optimize their existing on-premise AD infrastructure.
However, the most interesting element in this family of ADs clearly is AAD. AAD is a new type of thing that is “more than a directory service”. It can integrate with existing AD infrastructures, ideally by using Identity Federation based on ADFS (Active Directory Federation Services) with SAML v2 as the protocol.
It then is a service that runs as a real Cloud service. The latter aspect is proven, and looking at it provides some additional insight into AAD: AAD was in use long before it became publicly available. It is the directory service used by both Microsoft Office 365 and Microsoft Intune.
When looking at existing AD infrastructures, there are some common challenges that AAD can address, aside from running as a Cloud service (there you also could use the on-premise AD running on Azure).
One of the common challenges with AD is schema changes. Schema changes are a nightmare for many administrators. They can’t be rolled back, and they might affect AD performance. Thus, many administrators are extremely reluctant to make any schema changes. AAD solves this with its flexible, extensible data model. This data model has left behind the LDAP/X.500 heritage that is still visible in AD. Thus it comes as no surprise that the primary means of access to AAD is not LDAP (which is not even supported out-of-the-box yet) but the REST-based Graph API.
The second common challenge AD administrators are facing (amongst some others…) is the management of external users. Many organizations have implemented some approach to managing such externals, for instance in separate domains or at least subtrees. However, these approaches are not easy to define and implement from both a security and a scalability perspective. It might work rather well for some business partners and sub-contractors that need access for a longer period of time. However, onboarding thousands of employees of new business partners (for instance for sales to new target groups or in new regions) is something the on-premise AD is not ideally suited for.
AAD is built for that. Not only can it scale flexibly – think of millions of customers instead of some thousands or tens of thousands of employees or business partners – but it also supports federation by design. It can federate both inward and outward. In other words (and as mentioned above): it is not only a directory service but a federation platform.
And there is more: it is also a tool you can use to manage access to other Cloud services. It can act as an authorization service for these when federating with them. Based on policies, access to such services can be managed and restricted.
So there is a lot more in AAD than in the on-premise AD. AAD is the logical extension of AD for the “Extended Enterprise” or “Connected Enterprise” – whichever term you choose. It allows managing external users far more simply while being massively scalable. It allows managing access to Cloud services. And it still plays well with existing on-premise AD environments.
There are alternatives to AAD in the market. However, AAD is one of the Identity Cloud services worth having a deeper look at. The most important thing you should do when looking at AAD is to accept and understand that this is far more than the on-premise AD ported to the Cloud.
I will cover some aspects of AAD and the surrounding (and growing) ecosystem in upcoming blog posts.
11.08.2013 by Martin Kuppinger
Information Rights Management is the discipline within Information Security and IAM (Identity and Access Management) that allows protecting information right at the source: The single file. Files are encrypted and permissions for using the files are directly applied to the encrypted and packaged file.
This allows protection of documents across their entire lifecycle: at rest, in motion, and in use. Other Information Security technologies might only protect files at rest. Classical file server security can enforce access rights; however, once a user has access, he can do whatever he wants with that file. Other technologies protect the file transfer. But all of them fail to secure information across the entire lifecycle. That is where Information Rights Management comes into play.
Information Rights Management – more important than ever before
Information Rights Management (IRM) is more important than ever before. An increasing number of attacks against both on-premise and Cloud IT infrastructures, and the uncertainty and concerns regarding governmental agencies’ access to data sent over the Internet and held in the Cloud, are driving the need for better Information Security approaches that protect information throughout its lifecycle. In addition, there is an ever-growing number of regulations regarding privacy, the protection of Intellectual Property, etc.
Information Rights Management is the logical solution to these challenges, as far as documents are concerned, because – as mentioned above – it protects information at rest, in motion, and in use. Protection in use depends on the type of application: it requires applications with built-in support for Information Rights Management, or workarounds that at least inhibit certain operations such as printing.
Clearly, Information Rights Management also has its limits. The person photographing the screen still can bypass security. However, using Information Rights Management on a large scale would mean a big step forward for Information Security.
IRM: Not new – so why haven’t we already seen a breakthrough?
Given that IRM is such a logical approach for improving Information Security, the obvious question is: why don’t we already use it? There are several offerings from various vendors, but we are far from widespread adoption.
There are many reasons for that. The most important ones, so far, have been a lack of broad support for various file formats and applications, issues in dealing with external users that need to consume information, and the complexity of implementation. There have been other challenges, but these three are the most relevant ones.
Microsoft to remove the IRM inhibitors
Microsoft, one of the vendors that has been active in the IRM market for years now, is tackling these inhibitors. Microsoft RMS (Microsoft Rights Management Services) has been re-designed and enhanced. The Microsoft promise is that “Microsoft RMS enables the flow of protected data on all important devices, of all important file types, and lets these files be used by all important people in a user’s collaboration circle”. Another important capability is what Microsoft calls BYOK – Bring Your Own Key. Companies can manage their own keys in their own HSM (Hardware Security Module) on-premise, while the cloud service can ask the HSM to perform operations using that key. This is a complex topic I will cover in more depth in another post. There is also a broad range of implementation models, from doing everything in the cloud to more “cloud hesitant” approaches, serving the needs and addressing the concerns of various types of customers.
The Microsoft Rights Management suite is implemented as a Windows Azure service. By moving IRM to the Cloud, Microsoft enables flexible collaboration between various parties, beyond the traditional perimeter of the enterprise. Companies can flexibly collaborate with their business partners.
Moving RMS to the Cloud might raise security concerns. However, the documents themselves are never seen by the Azure RMS service. Azure RMS is responsible for secure key exchange between the involved client devices. It is responsible for requesting authentication and authorization information. This is done by relying on either the federated on-premise AD or Windows Azure AD. Other Identity Providers will be added over time, including Microsoft Account (aka LiveID) and Google IDs. Furthermore, Windows Azure AD provides flexibility for federating with external parties.
This flexibility is also the answer to the challenge of supporting all users within a collaboration circle. Windows Azure RMS does not rely solely on the on-premise Active Directory (and ADFS-based federation), but is far more flexible in onboarding and managing RMS users. Users from external partners can sign themselves up once they receive an RMS-protected document.
The second challenge always has been the management of file types and applications. Microsoft RMS supports “RMS-enlightened applications” (i.e. ones that have built-in support for RMS), a free RMS App that runs on various operating system platforms and supports various standard formats such as JPG, TXT, and XML, and finally a wrapping approach to protect file types that are not supported by the other two approaches. Furthermore, Microsoft has started building a significant ecosystem with various partners supporting environments such as CAD systems or documents exported from SAP environments. Based on these changes, RMS works well on a broad range of devices and for all relevant file types, including native support for the PDF format in the Microsoft-provided PDF reader.
With Azure RMS and all the new features in Microsoft RMS, setup and management of RMS become far easier than ever before – including policy management and usability for end users.
Thus, Microsoft provides answers to all three challenges mentioned at the beginning of this note: Dealing with all types of users; dealing with all types of file formats and applications; and reducing the complexity of IRM and specifically their own RMS.
There are some good sources for further information:
Have a look at these. From my perspective, it is well worth spending time on evaluating the new Microsoft RMS and Windows Azure RMS. I see a strong opportunity for the breakthrough of IRM as a technology with mass adoption.
This is only my first post on this subject; further posts will follow.
09.08.2013 by Martin Kuppinger
Some days ago I had a briefing with BMC Software on their new MyIT offering. MyIT is a self-service approach that enables end users to request services. It focuses on the user experience and tries to close the gap between the IT-centric view of services and the view business users have.
This aligns well with two areas of KuppingerCole research:
- One is the Future IT Paradigm by KuppingerCole, our definition of how we expect and recommend that IT organizations change in order to be able to deal with the changes in IT itself – the change from on-premise IT to hybrid models and an increasing portion of Cloud Computing and the overall influence of the Computing Troika (Cloud, Mobile, Social Computing).
- The other is what we call Assignment Management, i.e. an approach that allows managing not only access rights (as in Access Governance) but all types of assignments, including physical devices and flexible service requests for users.
We have various publications out on both of these topics. The Future IT Paradigm is covered extensively in the KuppingerCole report “Scenario: Understanding IT Service and Security Management”. The model described therein, now called the “Future IT Paradigm”, splits IT into three levels, with business service management on top, the management of services, information and security in the middle, and IT service production at the bottom. BMC MyIT fits well to the upper layer in this model. Another document worth reading when looking at that model is the KuppingerCole report “Scenario: The Future of IT Organizations”. That document describes how IT organizations have to adapt to the fundamental changes I have listed above.
The second area, Assignment Management, is covered in a whitepaper I wrote a while ago in which I outline the basic concept. It also is one of the investment areas for CIOs we have identified in our CIO GPS.
As mentioned, BMC MyIT aligns well with these concepts, by moving IT closer to the user. Instead of relying on IT-centric Service Catalogs and, in general, an IT-centric view, it is about translating this view into a user-centric perspective and making it accessible for everyday use by all users in a simple way.
BMC seems to be pushing this strategic approach. Yesterday they announced BMC AppZone, based on technology acquired through the purchase of Partnerpedia. It allows companies to implement company app stores that in turn can be integrated with BMC MyIT. Such a move – allowing business users to shop for apps when looking for IT services – is just so logical that you wonder why it has taken so long to become reality.
It will be interesting to see how BMC further executes on their strategy around AppZone and MyIT. I see a massive potential for both upselling to their existing customers (and leveraging existing investments) and approaching new customers. The biggest challenge as of now is that while the solution is designed to be ITSM agnostic, the standard integration of MyIT is currently limited to existing BMC backend infrastructure. Providing more standard integrations and simple, flexible interfaces for integration will be the critical success factor. If BMC solves that, this might enable them to be not only an IT Service Management, but also a Business Service Management leader.
08.08.2013 by Martin Kuppinger
I’m aware that this is a somewhat tangential post, as there is no relationship to our KuppingerCole topic of Mobile Security, but clearly it fits into the theme of the Computing Troika, i.e. the changes in Business and IT due to the impact of Cloud, Mobile, and Social Computing. However, the main purpose is to share some of my experiences with the Microsoft Surface RT I’ve been using for quite a while now.
I just upgraded to the Windows 8.1 Preview, which is a significant step forward for a simple reason: It includes Microsoft Outlook and I do not need to rely on either the Outlook Web App or the standard Mail app anymore. I will come back to this later.
What I do not like that much with my Surface is the angular design, in contrast to the smoother curves of an Apple iPad. However, my iPad has made it to the living room and is the device of choice for my wife now. I switched to the Surface due to the fact that the iPad was far too limited for my requirements in daily business use, especially when travelling. I need a strong tool for email, calendar, and task management. And I need the ability to not only read Word or PowerPoint documents in the proper formats but also to edit them, including the support for comments and the “track changes” mode of Word. I can do all that with my notebook, but when travelling this is a rather large device to lug around. Tablets are good when I know that I mainly will work on emails plus some other light work, or when I might want to use the Amazon Kindle app. Reading books on my notebook (one of these new convertibles) works, but the device is too heavyweight to really be convenient. So, in other words: I have been looking for a tablet that still provides my work environment. The “big” Surface Pro was not my choice due to its weight and height. I opted for the RT version.
This worked quite well for me. I like some of the features, such as having a classical Windows experience if I like and need to have it, or the USB port. This is so convenient. I remember back at EIC I had to quickly provide a presentation to the technicians. I just put in a USB stick and copied it from Microsoft SkyDrive. I can’t do that with an iPad. Dave Kearns sat beside me and just said: “Why doesn’t Microsoft promote that feature?” The other purpose I use the USB port for is attaching a mouse when I have to work more intensively. It just works.
I also like the fact that the device knows the distinction between a device and a user. This is an important security feature, especially for enterprise deployments – just think about machine and user certificates. I do not need it mandatorily, but it makes sense – and it even makes sense in the living room sometimes, if for instance the children should have limited access.
I feel comfortable with the screen resolution etc. The battery also lasts sufficiently long. Thus, there is – from my perspective – little need to switch to the Pro version of the Surface.
However, there are also some challenges. I do not have 3G support in the device. I solved this by using a mobile WiFi/3G router. This is very convenient, because it works for all of my mobile devices. I also miss one (but only one) app for Windows RT, which is the “Bahn” app provided by the German railway. I have it on my Windows Phone, but not yet for RT.
And then there has been the mail app. This clearly is not the best piece of software Microsoft ever created – it is closer to being one of the worst. After an update it just failed, because my folder structures on Office 365 are too complex for the app. But that has changed now: I have Outlook 2013 now, after upgrading to Windows 8.1 Preview. So I have a tablet with (close to) full Office 2013 capabilities, which makes a great tool for business travel and vacation.
Having been asked by several people about my experience with Microsoft Surface now (being one of the still rare users, obviously), I decided to share that experience in my blog.
05.08.2013 by Martin Kuppinger
When looking at recent security news, there is one predominant theme: the NSA surveillance disclosed by Edward Snowden. There is some other news, but little “breaking news”. We might count the news about the SIM card flaw; however, this seems to be less severe in reality than first reported.
I will not comment much on the NSA issue. Both Dave Kearns and I have touched on this topic here and here. There are a lot of political discussions going on, with some accusing others of not telling the (whole) truth about what they knew. Interestingly, here in Germany the opposition is accusing the current government, even though they were in government themselves some years ago and thus well aware of what has been going on at least since 2001. Clearly, this is not a topic for election campaigns, and at least until now it does not seem to be working out as one for the current opposition.
In addition, the reaction of Apple, Google, Microsoft and others did not surprise me. They are asking the US government to unveil more information about when they were urged to provide information to the NSA. That fits to what I have said from the very beginning: The entire thing is a business challenge, especially for US Cloud providers. Thus, they will create (some) political pressure. On the other hand: As long as there are no real alternatives to US-based Cloud services, not much will change. Maybe the shift from on-premise to the Cloud will slow down. However, over time the commotion will fizzle out.
Facebook usage in schools
Another news item that did not gain much attention is from Baden-Württemberg, the southwestern part of Germany I live in. The government of Baden-Württemberg has banned the use of Facebook for communication between teachers and their pupils. In some schools, Facebook had been used to communicate about homework and results. However, this communication might include privacy-relevant content. In addition, making Facebook a mandatory communications tool would force pupils into this social network. Thus, according to the order of the government of Baden-Württemberg (and in accord with German privacy regulations), it is not allowed. As I’ve mentioned, there has been little public discussion about this – either the use has been rather limited or the decision has been widely accepted.
Teaching computer science in schools?
When talking about schools, there has been another news item. The German BITMi (Bundesverband IT-Mittelstand e.V.), the association of medium-sized IT businesses, demands that computer science become a required subject in German schools, starting rather early. Currently, it is optional in many schools and regions, and taught as a separate subject in only a few grades, mainly the higher ones. However, it is an integral part of several courses in virtually all schools. Recently, Hamburg decided to reduce the time spent on computer science.
There is some discussion about whether pupils really need to learn coding – which is part of computer science as a separate subject, while the integrated part focuses more on core competencies in using computers, the Internet, word processors, spreadsheets, etc. That is open to debate. However, I’d like to see some thorough education on IT security in schools, so that pupils understand this critical subject far better than they typically do today.
23.07.2013 by Martin Kuppinger
Access Intelligence, sometimes also called Identity and Access Intelligence (IAI), is one of the hype topics in the Identity and Access Management (IAM) market. Some vendors try to position this as an entirely new market segment, while others understand this as part of Access Governance (or Identity and Access Governance, IAG).
The first question is what defines IAI. From my perspective there are two major capabilities required to call a feature IAI:
- It must use advanced analytical techniques that allow for a flexible combination and analysis of complex, large sets of data.
- It must support the analysis not only of historical and current access entitlements, but also of access information in context and based on actual use, ideally in run-time.
The first requirement is tightly related to the second one. IAI clearly cannot just rely on traditional reporting mechanisms. Analyzing more data and working with more complex data models will require other technologies, specifically Business Intelligence/Analytics and Big Data technologies.
The second requirement extends the current reach of Identity and Access Governance. IAG traditionally focuses on the comparison of as-is and to-be information about access entitlements in various systems. It also provides reporting capabilities on the current state of these entitlements, including information, for example, about high risk accounts etc.
IAI goes far beyond that, though. It should also enable analysis of the actual use of data, not only of the entitlements. Which documents have been used based on which entitlements? Is there high-risk information people try to access without sufficient entitlements? This analysis is based on information from various systems such as User Activity Monitoring (UAM), server log files, DLP (Data Leakage Prevention) systems, etc. It also can provide information back to other solutions. Access Intelligence thus becomes an important element in Information Stewardship.
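The kind of correlation described above can be illustrated with a toy example: combining entitlement data with activity logs to flag access attempts that are not backed by an entitlement, and to spot entitlements that are never exercised. The data and field names are invented for the example; real IAI tooling would of course work on far larger data sets with Business Intelligence or Big Data technology.

```python
# Toy illustration of Identity and Access Intelligence correlation:
# cross-referencing observed activity against granted entitlements.
entitlements = {
    ("alice", "hr-reports"),
    ("bob", "hr-reports"),
    ("bob", "finance-ledger"),
}

# e.g. collected from User Activity Monitoring, server logs, or DLP systems
activity_log = [
    {"user": "alice", "resource": "hr-reports", "granted": True},
    {"user": "alice", "resource": "finance-ledger", "granted": False},
    {"user": "bob", "resource": "finance-ledger", "granted": True},
]

def flag_unentitled_attempts(log, entitled):
    """Attempts on resources the user holds no entitlement for - a
    potential indicator of high-risk access attempts."""
    return [e for e in log if (e["user"], e["resource"]) not in entitled]

def unused_entitlements(log, entitled):
    """Entitlements never exercised in the observed period - candidates
    for removal in the next recertification campaign."""
    used = {(e["user"], e["resource"]) for e in log if e["granted"]}
    return entitled - used

print(flag_unentitled_attempts(activity_log, entitlements))
print(unused_entitlements(activity_log, entitlements))
```

Both outputs feed back into governance: the first into alerting and investigation, the second into tighter recertification decisions.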
IAI helps in moving from a static view to a dynamic view, especially once it supports real-time analytics. One could argue that this leads to an IAM version of SIEM tools (Security Information and Event Management). I’d rather say that it goes beyond that, because it combines IAG with IAI.
Identity and Access Analytics is just a logical extension and part of IAG tools. It allows for better governance. Thus, this should not be a separate set of products but become a part of every IAG solution. It is, by the way, only one of the areas where IAG has and will change. In my presentation about “Redefining Access Governance: Going well beyond Recertification” at EIC 2013, I talked about eight areas of advancement for IAG – and I admittedly missed one in that list that I covered in other presentations, which is IAG for Cloud Services. The video recording of the session is available online.
More information about the current state of the IAG market is available in the KuppingerCole Leadership Compass on Access Governance.
17.07.2013 by Martin Kuppinger
Last week I held a webinar on the recent news about secret/intelligence services such as the NSA and their activities, e.g. PRISM and others. The activities themselves are not really news, but the broad and intense public discussion about them is. In that context, many organizations have raised the question of whether they can still rely on Cloud Computing or whether they would be better off stopping their Cloud initiatives. Businesses raise this question particularly with regard to the risk of industrial espionage in the Cloud – something that is not proven, but is perceived as a risk by many businesses.
The main points I made are that
- there is a risk in Cloud Computing, but we should not underestimate the risks of attacks against on-premise environments;
- encryption across the entire information lifecycle is a key element in information security especially for Cloud Computing;
- businesses need to understand the information risks to decide about what to put in the Cloud and what not, but also to evaluate the protection requirements for different information.
The entire webinar has been recorded and is available for replay. It is in German.
The attendees raised a large number of questions that I could not fully answer in the remaining time at the end of the webinar. Thus, I want to address some of these questions now.
Are there specific Cloud encryption algorithms, how secure are they, and are they already in use?
One question was about encryption approaches for Cloud Computing and their security. In fact, there are several proven, strong encryption methods out there, and most of the algorithms have been published. Clearly, there is a risk of backdoors in concrete installations; however, this should not be overestimated. Backdoors that are not easily accessible to the surveilling agencies are of little interest to them.
There are no Cloud-specific algorithms, which makes sense for two reasons. One is that several well-established and proven encryption methods are already available. The other is that it makes little sense to run IT separately for on-premise and Cloud environments, given that most environments are hybrid.
So it is all about applying existing encryption methods and algorithms, although the solutions might vary, ranging from secure email and transport security such as TLS to secure folders or simply encrypted files held on Cloud services.
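The “encrypted files held on Cloud services” case can be sketched briefly: encrypt on-premise with an existing, proven method and upload only ciphertext, so the key never reaches the provider. This sketch assumes the widely used third-party Python package `cryptography` (its Fernet recipe, which builds on AES); the file content and the upload step are placeholders.

```python
# Sketch: client-side encryption before uploading a file to Cloud storage.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# The key is generated and kept on-premise; it is never sent to the CSP.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"confidential report"     # placeholder file content
ciphertext = f.encrypt(plaintext)      # only this ciphertext is uploaded

# Later, after downloading the ciphertext again, decrypt locally:
assert f.decrypt(ciphertext) == plaintext
```

The important property is organizational, not cryptographic: because encryption happens before upload, the Cloud Service Provider only ever handles opaque ciphertext.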
Are there encryption approaches where the encryption is managed by the Cloud Service Provider, but all keys are on-premise at the customer?
The simple answer here is: no. The CSP needs access to the key in order to encrypt, so it cannot perform the encryption without it. And once the CSP has access to the key, it could store it or pass it on to someone else.
How do we know that S/MIME implementations of vendors do not contain backdoors for the NSA, for instance via “key escrow”?
For closed-source software, we do not know. However, unless the vendor has access to the keys, there cannot be any key escrow. That risk therefore applies to Cloud services where keys are stored at the CSP; as long as the keys are managed on-premise, key escrow does not work.
How can I support employees in my organization in better protecting tools such as Salesforce.com Chatter or Microsoft SharePoint? These tools are rather unprotected by default. Can I use them at all in the manufacturing industry?
As with any tools, both on-premise and Cloud, decisions about procurement and implementation should take security into account. The use of Cloud tools favored by the business might require mitigating controls to deal with information risk in an appropriate way. More information on this is available in the replay of this webinar.
In general, organizations should implement the concept of Information Stewardship. You will find extensive information on that concept at our website and in the EIC presentations and videos.
I would not say that these tools cannot be used at all. However, it is important to understand what information is stored or communicated using these tools and to configure them accordingly – or restrict their use. Thus, a thorough understanding of information classification and risk, together with well-defined policies, is required before these tools are used.
Isn’t there a risk in using encryption technologies to bypass security?
Clearly, there is some risk. S/MIME or PGP might be used to forward information to unauthorized recipients. It comes as no surprise that the Tor network is frequently used for illegal purposes. This is about finding the right balance.
How can I enforce confidentiality for internal communication?
Technically, many approaches for digitally signing email and documents are available, as well as encryption. Lotus Notes/Domino is one of the systems that have supported this for many years. S/MIME is a standard that supports this for email. Enterprise Rights Management technologies such as Microsoft RMS (Rights Management Services) can do that for documents. So various approaches are available, many of them rather mature. Thus, it is about re-evaluating the information risks and identifying an adequate set of technologies to help mitigate these risks, based on well-defined policies.
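The sign-and-verify pattern underlying all of these technologies can be shown with a minimal sketch. Note the simplification: this uses an HMAC with a shared secret from the Python standard library, whereas S/MIME and ERM products use asymmetric keys and certificates; the secret and messages below are placeholders.

```python
# Sketch of the sign/verify pattern for internal communication, using only
# the Python standard library. Real S/MIME or RMS deployments use
# certificate-based asymmetric cryptography instead of a shared secret.
import hashlib
import hmac

secret = b"on-premise shared key"  # placeholder key material

def sign(message: bytes) -> str:
    """Compute an authentication tag over the message."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag in constant time to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"internal memo"
tag = sign(msg)
assert verify(msg, tag)              # authentic message passes
assert not verify(b"tampered", tag)  # any modification is detected
```

Whatever the concrete product, the recipient-side check is the point: a verifiable tag or signature makes tampering and spoofing detectable, which is the foundation for enforcing confidentiality policies on top.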
It is not a question of technology availability. It is a question of setting the organizational framework (Information Stewardship) and investing in security. With all the new incidents – and this goes beyond nation-state attacks and suspected industrial espionage to all the cyber-attacks of today – the equation changes. The risk is far higher today, thus investing in information security is increasingly an economic imperative for businesses.
What about article 10 of the German constitution?
The German constitution (“Grundgesetz”) states on the one hand that the privacy of correspondence, posts, and telecommunications is inviolable. On the other hand, the second part of article 10 states that the law may allow exceptions, especially to protect the free democratic basic order or the state of Germany. That gives the government some latitude – so we should not be too surprised if we learn in the future about the activities of the German intelligence/secret services.
Interestingly, one of the participants pointed back to the cover story of the German news magazine “Der Spiegel” from week 8 of 1989. That story was about Echelon and talked about the fact that industrial espionage was already happening. However, there was little attention to that story back then. Things have changed now.
Still, as I said in the webinar: there is not that much news, and there are even fewer proven facts. Companies should simply assume that their information is at risk and act accordingly, both in on-premise environments and in the Cloud.
If you need our advice on that, just contact my colleagues at email@example.com and listen to upcoming KuppingerCole webinars on that topic.