09.08.2013 by Martin Kuppinger
Some days ago I had a briefing with BMC Software on their new MyIT offering. MyIT is a self-service approach that enables end users to request services. It focuses on the user experience and tries to close the gap between the IT-centric view of services and the view business users have.
This aligns well with two areas of KuppingerCole research:
- One is the Future IT Paradigm by KuppingerCole, our definition of how we expect and recommend that IT organizations change in order to deal with the changes in IT itself: the shift from on-premise IT to hybrid models with a growing portion of Cloud Computing, plus the overall influence of the Computing Troika (Cloud, Mobile, and Social Computing).
- The other is what we call Assignment Management, i.e. an approach that allows managing not only access rights (as in Access Governance) but all types of assignments, including physical devices and flexible service requests for users.
We have various publications out on both of these topics. The Future IT Paradigm is covered extensively in the KuppingerCole report “Scenario: Understanding IT Service and Security Management”. The model described therein, now called the “Future IT Paradigm”, splits IT into three levels, with business service management on top, the management of services, information and security in the middle, and IT service production at the bottom. BMC MyIT fits well to the upper layer in this model. Another document worth reading when looking at that model is the KuppingerCole report “Scenario: The Future of IT Organizations”. That document describes how IT organizations have to adapt to the fundamental changes I have listed above.
The second area, Assignment Management, is covered in a whitepaper I wrote a while ago in which I outline the basic concept. It also is one of the investment areas for CIOs we have identified in our CIO GPS.
As mentioned, BMC MyIT aligns well with these concepts, by moving IT closer to the user. Instead of relying on IT-centric Service Catalogs and, in general, an IT-centric view, it is about translating this view into a user-centric perspective and making it accessible for everyday use by all users in a simple way.
BMC seems determined to push this strategic approach. Yesterday they announced BMC AppZone, based on technology acquired through the purchase of Partnerpedia. AppZone allows companies to implement corporate app stores that in turn can be integrated with BMC MyIT. Such a move – letting business users shop for apps the way they shop for any other IT service – is so logical that you wonder why it has taken this long to become reality.
It will be interesting to see how BMC further executes on its strategy around AppZone and MyIT. I see massive potential both for upselling to their existing customers (leveraging existing investments) and for approaching new customers. The biggest challenge as of now is that while the solution is designed to be ITSM-agnostic, the standard integration of MyIT is currently limited to existing BMC backend infrastructure. Providing more standard integrations and simple, flexible interfaces for integration will be the critical success factor. If BMC solves that, it might become not only an IT Service Management leader but also a Business Service Management leader.
08.08.2013 by Martin Kuppinger
I’m aware that this is a somewhat tangential post, as there is no relationship to our KuppingerCole topic of Mobile Security, but clearly it fits into the theme of the Computing Troika, i.e. the changes in Business and IT due to the impact of Cloud, Mobile, and Social Computing. However, the main purpose is to share some of my experiences with the Microsoft Surface RT I’ve been using for quite a while now.
I just upgraded to the Windows 8.1 Preview, which is a significant step forward for a simple reason: It includes Microsoft Outlook and I do not need to rely on either the Outlook Web App or the standard Mail app anymore. I will come back to this later.
What I do not like that much about my Surface is the angular design, in contrast to the smoother curves of an Apple iPad. However, my iPad has made it to the living room and is now my wife's device of choice. I switched to the Surface because the iPad was far too limited for my requirements in daily business use, especially when travelling. I need a strong tool for email, calendar, and task management. And I need the ability not only to read Word or PowerPoint documents in their proper formats but also to edit them, including support for comments and the "track changes" mode of Word. I can do all that with my notebook, but when travelling that is a rather large device to lug around. Tablets are good when I know I will mainly work on email plus some other light tasks, or when I might want to use the Amazon Kindle app. Reading books on my notebook (one of these new convertibles) works, but the device is too heavy to really be convenient. So, in other words: I have been looking for a tablet that still provides my work environment. The "big" Surface Pro was not my choice due to its weight and thickness. I opted for the RT version.
This has worked quite well for me. I like features such as having a classical Windows experience when I want or need it, or the USB port. That port is so convenient. I remember back at EIC I had to quickly provide a presentation to the technicians: I just plugged in a USB stick and copied the file over from Microsoft SkyDrive. I can't do that with an iPad. Dave Kearns sat beside me and just said: "Why doesn't Microsoft promote that feature?" The other purpose I use the USB port for is attaching a mouse when I have to work more intensively. It just works.
I also like the fact that the operating system distinguishes between the device and the user. This is an important security feature, especially for enterprise deployments – just think about machine versus user certificates. I do not strictly need it, but it makes sense – and it even makes sense in the living room sometimes, for instance if the children should have limited access.
I feel comfortable with the screen resolution etc. The battery also lasts sufficiently long. Thus, there is – from my perspective – little need to switch to the Pro version of the Surface.
However, there are also some challenges. I do not have 3G support in the device. I solved this by using a mobile WiFi/3G router. This is very convenient, because it works for all of my mobile devices. I also miss one (but only one) app for Windows RT, which is the “Bahn” app provided by the German railway. I have it on my Windows Phone, but not yet for RT.
And then there was the Mail app. This is clearly not the best piece of software Microsoft ever created – it is closer to being one of the worst. After an update it simply failed, because my folder structures on Office 365 are too complex for the app. But that has changed now: after upgrading to the Windows 8.1 Preview, I have Outlook 2013. So I have a tablet with (close to) full Office 2013 capabilities, which makes it a great tool for business travel and vacation.
Having been asked by several people about my experience with the Microsoft Surface (as one of the still rather rare users, obviously), I decided to share that experience in my blog.
05.08.2013 by Martin Kuppinger
When looking at the recent security news, there is one predominant theme: the NSA surveillance disclosures by Edward Snowden. There is some other news, but little "breaking news". We might count the news about the SIM card flaw; however, this seems to be less severe in reality than first reported.
I will not comment much on the NSA issue; both Dave Kearns and I have touched on this topic here and here. There are a lot of political discussions going on, with some accusing others of not telling the (whole) truth about what they knew. Interestingly, here in Germany the opposition is accusing the current government, even though they themselves were in government some years ago and thus were well aware of what has been going on at least since 2001. Clearly, this is not a topic for election campaigns, and at least until now it does not seem to be working out as such for the current opposition.
In addition, the reaction of Apple, Google, Microsoft, and others did not surprise me. They are asking the US government to unveil more information about when they were urged to provide information to the NSA. That fits with what I have said from the very beginning: the entire thing is a business challenge, especially for US Cloud providers. Thus, they will create (some) political pressure. On the other hand, as long as there are no real alternatives to US-based Cloud services, not much will change. Maybe the shift from on-premise to the Cloud will slow down. However, over time the commotion will fizzle out.
Facebook usage in schools
Another news item that did not gain much attention comes from Baden-Württemberg, the southwestern part of Germany where I live. The government of Baden-Württemberg has forbidden the use of Facebook for communication between teachers and their students. In some schools, Facebook has been used to communicate about homework and its results. However, this communication might include privacy-relevant content. In addition, making Facebook a mandatory communication tool would force students into this social network. Thus, according to the order of the government of Baden-Württemberg (and in accord with German privacy regulations), it is not allowed. As mentioned, there has been little public discussion about this – either the use has been rather limited or the decision has been widely accepted.
Teaching computer science in schools?
When talking about schools, there has been another news item. The German BITMi (Bundesverband IT-Mittelstand e.V.), the association of medium-sized IT businesses, demands that computer science become a required subject in German schools, starting rather early. Currently, it is optional in many schools and regions, and taught as a separate subject in only a few grades, mainly the higher ones. However, it is an integral part of several courses in virtually all schools. Recently, Hamburg decided to reduce the time spent on computer science.
There is some discussion about whether students really need to learn coding – which is part of computer science as a separate subject, while the integrated part focuses more on core competencies in using computers, the Internet, word processors, spreadsheets, etc. I think this is worth debating. However, I'd like to see some thorough education on IT security in schools, so that students understand this critical subject far better than they typically do today.
23.07.2013 by Martin Kuppinger
Access Intelligence, sometimes also called Identity and Access Intelligence (IAI), is one of the hype topics in the Identity and Access Management (IAM) market. Some vendors try to position this as an entirely new market segment, while others understand this as part of Access Governance (or Identity and Access Governance, IAG).
The first question is what defines IAI. From my perspective there are two major capabilities required to call a feature IAI:
- It must use advanced analytical techniques that allow for a flexible combination and analysis of complex, large sets of data.
- It must support the analysis not only of historical and current access entitlements, but also of access information in context and based on actual use, ideally in run-time.
The first requirement is tightly related to the second one. IAI clearly cannot just rely on traditional reporting mechanisms. Analyzing more data and working with more complex data models will require other technologies, specifically Business Intelligence/Analytics and Big Data technologies.
The second requirement extends the current reach of Identity and Access Governance. IAG traditionally focuses on the comparison of as-is and to-be information about access entitlements in various systems. It also provides reporting capabilities on the current state of these entitlements, including information about, for example, high-risk accounts.
IAI goes far beyond that, though. It should also enable analysis of the actual use of data, not only of the entitlements. Which documents have been used based on which entitlements? Is there high-risk information people try to access without sufficient entitlements? This analysis is based on information from various systems such as User Activity Monitoring (UAM), server log files, DLP (Data Leakage Prevention) systems, etc. It also can provide information back to other solutions. Access Intelligence thus becomes an important element in Information Stewardship.
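To make the difference concrete, here is a minimal sketch – in Python, with entirely made-up data structures and field names – of the kind of correlation an IAI tool performs: joining as-is entitlements with activity data to flag attempts on high-risk information by users who lack the corresponding entitlements.

```python
# Minimal sketch: correlating entitlements with actual access events.
# All data structures and field names are made up for illustration.

# As-is entitlements: who is allowed to access which resource
entitlements = {
    ("alice", "hr-salaries"),
    ("bob", "public-wiki"),
}

# Events as a UAM tool or server log might report them
access_events = [
    {"user": "alice", "resource": "hr-salaries", "granted": True},
    {"user": "bob", "resource": "hr-salaries", "granted": False},
    {"user": "bob", "resource": "public-wiki", "granted": True},
]

high_risk_resources = {"hr-salaries"}

# Flag attempts on high-risk information by users without entitlements
for event in access_events:
    key = (event["user"], event["resource"])
    if event["resource"] in high_risk_resources and key not in entitlements:
        print(f"ALERT: {event['user']} attempted {event['resource']} "
              f"(granted={event['granted']})")
```

A real product would of course run such correlations continuously over large event streams, which is exactly where the Big Data and analytics technologies mentioned above come into play.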
IAI helps in moving from a static view to a dynamic view, especially once it supports real-time analytics. One could argue that this leads to an IAM version of SIEM tools (Security Information and Event Management). I’d rather say that it goes beyond that, because it combines IAG with IAI.
Identity and Access Analytics is just a logical extension of, and part of, IAG tools. It allows for better governance. Thus, this should not be a separate set of products but become part of every IAG solution. It is, by the way, only one of the areas where IAG has changed and will continue to change. In my presentation "Redefining Access Governance: Going well beyond Recertification" at EIC 2013, I talked about eight areas of advancement for IAG – and I admittedly missed one in that list that I covered in other presentations, which is IAG for Cloud Services. The video recording of the session is available online.
More information about the current state of the IAG market is available in the KuppingerCole Leadership Compass on Access Governance.
17.07.2013 by Martin Kuppinger
Last week I did a webinar concerning the recent news about secret/intelligence services such as the NSA and their activities, e.g. PRISM and others. The activities themselves are not really news, but the broad and intense public discussion about them is. In that context, many organizations have raised the question of whether they can still rely on Cloud Computing or whether they would be better off stopping their Cloud initiatives. Businesses raise this question especially with regard to the risk of industrial espionage in the Cloud – something that is not proven, but appears to be a risk from the perspective of many businesses.
The main points I made are that
- there is a risk in Cloud Computing, but we should not underestimate the risks of attacks against on-premise environments;
- encryption across the entire information lifecycle is a key element in information security especially for Cloud Computing;
- businesses need to understand the information risks to decide about what to put in the Cloud and what not, but also to evaluate the protection requirements for different information.
The entire webinar has been recorded and is available for replay. It is in German.
The attendees raised a large number of questions that I could not fully answer in the remaining time at the end of the webinar. Thus, I want to address some of these questions now.
Are there specific Cloud encryption algorithms, how secure are they, and are they already in use?
One question was about encryption approaches for Cloud Computing and their security. In fact, there are several proven, strong encryption methods out there, and most of the algorithms have been published. Clearly, there is a risk of backdoors in implementations; however, this should not be overestimated. Backdoors that are not easily accessible to the surveillants are of little interest to them.
There are no specific algorithms for the Cloud, which makes sense for two reasons. One is that there are several well-established and proven encryption methods already available. Another is that there is no sense in doing IT for on-premise and the Cloud separately, given that most environments are hybrid.
So it is all about applying existing encryption methods and algorithms, although the solutions might vary, ranging from secure email and transport security such as TLS to secure folders or simply encrypted files held on Cloud services.
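As a simple illustration of the last option – encrypted files held on Cloud services while the key stays on-premise – here is a minimal sketch using the Fernet recipe from the Python cryptography package (my choice for illustration; any proven symmetric scheme would do):

```python
# Minimal sketch: encrypting a file locally before it is stored
# on a Cloud service, so the key never leaves the premises.
from cryptography.fernet import Fernet

# Generate (once) and keep the key on-premise, never in the Cloud
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"Confidential design document"
ciphertext = cipher.encrypt(plaintext)

# Only 'ciphertext' is uploaded to the Cloud service;
# decryption happens locally with the on-premise key
assert cipher.decrypt(ciphertext) == plaintext
```

The point is the placement of the key, not the particular algorithm: only ciphertext ever reaches the Cloud service.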
Are there encryption approaches where the encryption is managed by the Cloud Service Provider, but all keys are on-premise at the customer?
The simple answer here is: no. To encrypt the data, the CSP needs access to the key. And once the CSP has that access, it can potentially store the key or pass it on to someone else.
How do we know that S/MIME implementations of vendors do not contain backdoors for the NSA, for instance via “key escrow”?
For closed source, we do not know. However, unless the vendor has access to the keys, there cannot be any key escrow. Thus, that risk applies to Cloud services where keys are stored at the CSP. As long as the keys are managed on-premise, key escrow does not work.
How can I support the employees in my organization in better protecting tools such as Salesforce.com Chatter or Microsoft SharePoint? These tools are rather unprotected by default. Can I use them at all in the manufacturing industry?
As with any tools, both on-premise and Cloud, decisions about procurement and implementation should take security into account. The use of Cloud tools favored by the business might require mitigating controls to deal with information risk in an appropriate way. More information on this is available in the replay of this webinar.
In general, organizations should implement the concept of Information Stewardship. You will find extensive information on that concept at our website and in the EIC presentations and videos.
I would not say that these tools could not be used at all. However, it is important to understand what information is stored or communicated using these tools and configure them accordingly – or restrict their use. Thus, it requires a thorough understanding of information classification and risk and well-defined policies, before these tools are used.
Isn’t there a risk in using encryption technologies to bypass security?
Clearly, there is some risk. S/MIME or PGP might be used to forward information to unauthorized recipients. It comes as no surprise that the Tor network is frequently used for illegal purposes. This is about finding the right balance.
How can I enforce confidentiality for internal communication?
Technically, many approaches for digitally signing email and documents are available, as well as encryption. Lotus Notes/Domino is one of the systems that have supported this for many, many years. S/MIME is a standard that supports this for email. Enterprise Rights Management technologies such as Microsoft RMS (Rights Management Services) can do the same for documents. So various approaches are available, many of them rather mature. Thus, it is about re-evaluating the information risks and identifying an adequate set of technologies to help mitigate these risks, based on well-defined policies.
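To illustrate the signing principle behind S/MIME and document signatures, here is a minimal sketch using the Python cryptography package. It shows only the raw sign/verify mechanics; a real S/MIME deployment adds certificates, trust chains, and MIME packaging on top:

```python
# Minimal sketch of the digital signing principle: the sender signs
# with a private key, the recipient verifies with the public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Internal memo: Q3 figures attached."

# Sender signs with the private key
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Recipient verifies with the public key; this raises
# InvalidSignature if the message was tampered with
private_key.public_key().verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified")
```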
It is not a question of technology availability. It is a question of setting the organizational framework (Information Stewardship) and investing in security. With all the new incidents – and this goes beyond nation-state attacks and suspected industrial espionage to all the cyber-attacks of today – the equation changes. The risk is far higher today, thus investing in information security is increasingly an economic imperative for businesses.
What about article 10 of the German constitution?
The German constitution ("Grundgesetz") states on the one hand that the privacy of correspondence, posts, and telecommunications is inviolable. On the other hand, the second part of article 10 states that the law may allow exceptions, especially to protect the free democratic basic order or the security of the German state. That gives the government some latitude – so we should not be too surprised if we learn in the future about the activities of the German intelligence services.
Interestingly, one of the participants pointed back to the cover story of the German news magazine "Der Spiegel" from week 8 of 1989. That story was about Echelon and the fact that industrial espionage was already happening back then. However, the story attracted little attention at the time. Things have changed now.
Still, as I have said in the webinar: there is not that much news, and there are even less proven facts. Companies should just assume that their information is at risk and act accordingly, both in on-premise environments and the Cloud.
If you need our advice on that, just contact my colleagues at email@example.com and listen to upcoming KuppingerCole webinars on that topic.
08.07.2013 by Martin Kuppinger
Today RSA Security, a part of EMC [officially it’s “RSA, The Security Division of EMC”], has officially announced the acquisition of Aveksa, a company based in Waltham, MA. The deal closed on July 1st, 2013. Aveksa is a leading provider in the area of Identity and Access Governance (IAG), as depicted in our KuppingerCole Leadership Compass on Access Governance. Aveksa will continue to operate under the current leadership of its CEO Vick Viren Vaishnavi and will be part of the RSA Identity Trust Management business. Aveksa currently has approximately 175 employees.
One might ask why RSA did not enter the "core IAM" business earlier, when that core was mainly Identity Provisioning; for some years now it has been complemented by and shifted towards IAG. Many people had expected such a move from RSA, given that the company already delivers in several other areas of the IAM market, including Strong Authentication, Versatile Authentication, Access Management, and Federation. With the Aveksa acquisition, RSA has definitely made its move.
Instead of focusing on the traditional Identity Provisioning market, they focused on the emerging IAG market segment. Aveksa delivers some built-in provisioning capabilities but clearly does not have the breadth of connectors that the key players in that market segment provide. However, with IAG increasingly becoming an integration layer on top of existing "legacy" provisioning tools, Aveksa has emerged as a major player. By adding some provisioning capabilities, customer requirements can typically be covered. Aveksa builds here on an enterprise-grade approach with an Enterprise Service Bus (ESB) as the transport layer. Support for manual fulfillment is another important element. Simply said: the number of connectors is not the key decision gauge. The main measure is the support for a structured and user-friendly approach to Access Request Management, Recertification, and Access Analytics, including the underlying Enterprise Role Management.
However, the real potential of that acquisition is not that RSA as of now can provide a solution for IAG. The potential is in combining the capabilities of both companies to open new grounds for Access Governance, beyond that which is common today. In my presentation about “Redefining Access Governance: Going well beyond Recertification” at EIC 2013, I talked about eight areas of advancement for IAG – and I admittedly missed one in that list that I covered in other presentations, which is IAG for Cloud Services. The video recording of the session is available online.
There is much room for improvement. Aveksa is a strong player in IAG. RSA adds not only Access Management and Federation, but strong and versatile authentication. And there is RSA Archer, an Enterprise GRC solution. The combination of RSA and Aveksa is, by the way, the only one in the market where strong authentication and IAG come together in one vendor. That will allow creating Access Governance for risk- and context-based authentication and authorization, the next big trend in IT. My colleague Dave Kearns and I both talked about that topic at EIC 2013, and Dave will do a webinar on this topic later this month. Governing the rules for such environments and adding analytics on top is a field of high interest. And this is clearly not the only area where the two companies can leverage synergies, given the tight relationship between cyber-attacks and Access Management and Analytics.
RSA and Aveksa have started talking about some promising ideas, even in the context of EMC. EMC can add Big Data capabilities that allow moving IAG to the next level when it comes to analytics. And not only that: Combining authentication information, external threat intelligence, risk analytics etc. – all in the combined portfolio – might lead to game-changing offerings.
So there is strong potential. Let's see whether, how, and when RSA delivers on it. Still, when looking at acquisitions, the other important question is: what does this mean for existing customers? The good thing is that there is virtually no overlap between the current product portfolios of the two companies. Thus, there are no products that are likely to be discontinued. In fact, for RSA customers there is a real chance of new and advanced offerings. For existing Aveksa customers, the acquisition means that their supplier is no longer a niche player but part of a far larger vendor, with substantial financial backing and a far broader portfolio. Thus, there is a strong chance that this turns out to be positive for existing Aveksa customers.
But as always: Only time will tell.
25.06.2013 by Martin Kuppinger
The big topic clearly is what Edward Snowden unveiled: the PRISM program and some other nation-state activities on the Internet. In fact, this did not really come as a surprise; there have been discussions and rumors about such activities (and others) for many, many years. Maybe it helps drive forward risk- and information-centric security concepts and end-to-end security instead of investment in point solutions. I will cover that topic in another blog post soon.
Facebook again struggles with privacy
However, besides PRISM etc. there have been various other security-related incidents and news items. Facebook inadvertently shared the email addresses and phone numbers of 6 million users with other members. That also comes as no surprise, given that Facebook has always excelled at weak security and privacy architectures and implementations.
Google under regulatory pressure – again
Google finds itself confronted with new pressure from regulators. The U.K. ICO (Information Commissioner's Office) has placed a legal requirement on Google to delete any data the company still holds related to its Street View snooping.
In addition, the French regulator CNIL (Commission nationale de l’informatique et des libertés) ordered Google to change its privacy policies. Unfortunately, the fines are ridiculously low, starting at 150,000 €. Obviously, the plans of the EU to massively increase the potential fines and relate them to an organization’s annual revenue would put far more pressure on companies such as Google.
Old bugs appear again
Sometimes, security weaknesses appear to have a long lifetime. A bug that Adobe had fixed back in 2011 appeared again in the Adobe Flash plug-in for the Google Chrome browser. Adobe informed the public that Google is working on a patch for that bug.
And again plug-ins
Plug-ins in general appear to be a potential weakness when it comes to security. The German BSI, the Federal Office for Information Security, analyzed systems such as WordPress, Joomla!, and Typo3 from a security perspective. Most of the identified security weaknesses – sometimes up to 95% – are related to plug-ins and add-ons. Thus, you should be (even more) careful when you start enhancing such systems.
Besides these news items, there have been many others. One of the positive reports is that Microsoft and the FBI recently shut down a massive Citadel botnet. A negative one is another issue in the DNS system, where a human error led to the mis-routing of thousands of domains. Maybe it is time to start developing a successor to the stone-age DNS system?
In general, the security situation appears to remain rather unchanged: a lot of security bugs, incidents caused by human error, nation-state attacks and other activities, and the ongoing struggle around privacy, including some massive data leaks.
13.05.2013 by Martin Kuppinger
Since my colleague Craig Burton declared that SAML is dead, it seems to be in vogue among analysts to take on the role of the public medical officer and diagnose the death of standards or even of IAM (Identity and Access Management) in general. Admittedly, the latter case was not about diagnosing a death but about proposing to kill IAM, but that does not change much. The newest in this series of dead bodies is XACML, according to another industry analyst. So we are surrounded by corpses now, or maybe by living zombies. But is that really true? My colleague Craig Burton titled his blog post – for a very good reason – "SAML is Dead! Long Live SAML!" That is fundamentally different from saying "XACML is dead".
There are a lot of good answers from experts such as Ian Glazer, Gerry Gebel (OK, he might be a little biased, being the President of Axiomatics Americas), and Danny Thorpe.
I can hardly be suspected of being an enthusiastic XACML evangelist wearing blinders – just ask some of the Axiomatics guys; we have had many controversial discussions over the years. However, for me it is clear that neither Dynamic Authorization Management in general nor XACML in particular is dead.
What puzzled me most in that blog post was this part of the initial sentence:
XACML … is largely dead or will be transformed into access control
OK, “access control”. XACML is access control. Access control is everything around authentication and authorization. So what does this mean? I just do not understand that sentence, sorry. XACML is a part of the overall Access Control story.
From my perspective, the two most important concepts within access control are Dynamic Authorization Management and risk-/context-based access control (covering both authentication and authorization). The latter will only work with Dynamic Authorization Management in place: if we want to make authorization decisions based on context and risk, we need systems that externalize authorization and rely on rules that can take that context into account.
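A minimal sketch of what "externalizing authorization" means in practice – a toy policy decision point (PDP) in Python, with made-up rules and context attributes – looks like this:

```python
# Minimal sketch of externalized, context-aware authorization:
# the application asks a policy decision point (PDP) instead of
# hard-coding access rules. The policy itself is made up.

def pdp_decide(subject: str, action: str, resource: str, context: dict) -> bool:
    """Toy policy decision point: rules can use runtime context."""
    # Rule: finance staff may approve payments, but only from the
    # corporate network and only when the session risk score is low
    if action == "approve-payment" and resource == "payments":
        return (context.get("role") == "finance"
                and context.get("network") == "corporate"
                and context.get("risk_score", 1.0) < 0.3)
    return False  # deny by default

# The application (the policy enforcement point) just asks the PDP
context = {"role": "finance", "network": "corporate", "risk_score": 0.1}
if pdp_decide("alice", "approve-payment", "payments", context):
    print("Permit")
else:
    print("Deny")
```

In a real deployment the PDP is a separate product and the policy is expressed in a language such as XACML rather than hard-coded, but the division of labor – the application asks, the policy engine decides – is the same.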
The challenge with Dynamic Authorization Management – i.e. technologies implemented in a variety of products such as the Axiomatics Policy Server, the Oracle Entitlements Server, the IBM Security Policy Manager, Quest APS, and many others – is that it requires changes in both application code and the mindset of software developers and architects. That is a long journey. On the other hand, we see some increase in the acceptance and use of such technologies. Notably, Dynamic Authorization Management is not new: you will find such concepts dating back to the mid-'70s in mainframe environments, and IBM's good old RACF can be considered an early example.
You still can argue that Dynamic Authorization Management is alive but XACML as the most important standard around it is dead. There are good arguments against that, and I will not repeat what the others mentioned above have said. You might discuss where to use XACML and where to rely on proprietary technology. However, do you really want to lock in your entire application landscape into a proprietary Dynamic Authorization Management technology of a single vendor? That would be a nightmare. You need to isolate your applications from the Dynamic Authorization Management system in use, and a standard helps in doing that. Just think about being locked into proprietary interfaces for all of your applications using a specific Dynamic Authorization Management system for the next 30, 40 or more years.
XACML is the even better choice for COTS applications: they can rely on a standard instead of every vendor building proprietary connectors. Most vendors will do that for Microsoft SharePoint, because SharePoint is so important. But that is the exception, not the rule. And deducing from the fact that vendors support SharePoint with proprietary interfaces (instead of XACML) that XACML is dead is simply a wrong deduction. The problem in that case is not XACML but the SharePoint security model, which is clearly not the best I have ever seen (to say the least). XACML is of value. Standards are of value. And I believe you would need much better reasons to diagnose the death of a standard.
To learn more about the real trends in IAM, IAG, Cloud Security, and many other topics, just visit the EIC 2013 that starts on Tuesday, May 14th.
01.05.2013 by Martin Kuppinger
Last week, Google announced that it has joined the FIDO Alliance. FIDO stands for Fast Identity Online. The alliance was formed in July 2012; its mission is to change the nature of online authentication by providing interoperability among strong authentication devices. The alliance is working on specifications for an open, scalable, interoperable set of mechanisms that allow secure authentication of users to online services without the need for a password for each of these services. It aims to become a standard that allows using both existing and future strong authentication devices (those that support the FIDO standard) in an interoperable way.
This is in fact about “versatile authentication” from scratch, enabled in any device. Currently, many organizations are investing in versatile authentication technology that allows them to flexibly change and combine different authentication mechanisms. With FIDO, that could become a standard.
Users can use a choice of different mechanisms for strong authentication, including hardware tokens, embedded hardware such as TPMs (Trusted Platform Modules), biometrics, etc. The website will recognize the devices as “FIDO devices” and enable them. Once a strong authentication device is connected to a site, it can be used the same way it has always been used.
FIDO requires a browser plugin, which is the simple part of the story. It also requires a device-specific module that must be installed to use the “FIDO authenticator”, i.e. the strong authentication device of choice. The website or online service must also support FIDO.
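For readers who want the gist of how such a device authenticates without a password: the underlying pattern is public-key challenge-response. The following sketch (in Python, and emphatically not the actual FIDO protocol) illustrates that pattern:

```python
# Minimal sketch of the public-key challenge-response pattern that
# FIDO-style passwordless authentication builds on; this is NOT the
# actual FIDO protocol, just the underlying idea.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the device creates a key pair; only the public key
# is registered with the online service
device_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_key.public_key()

# Login: the service sends a fresh random challenge ...
challenge = os.urandom(32)

# ... the device signs it locally (no shared secret, no password) ...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ... and the service verifies the signature against the stored key.
# Raises InvalidSignature if it does not match.
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("User authenticated")
```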
The success of FIDO will depend on two factors. First, there must be a critical mass of online services supporting FIDO; given that several large service providers are already members of the FIDO Alliance, that might happen. Second, there needs to be a critical mass of users with strong authentication devices that support FIDO. The challenge there will be simple enablement of FIDO through browser plugins (even better if they come pre-installed) and especially the availability and simple deployment of device-specific modules.
On the other hand there clearly is the question of whether FIDO will gain sufficient support and acceptance amongst the vendors. What will the vendors of strong authentication devices do? What will the vendors of versatile authentication platforms do? And what will the providers of online authentication services do?
From my perspective FIDO could help all of them. It provides the opportunity for “strong authentication for the masses”, for a ubiquitous approach that works for everyone, with flexible choice of strong authentication devices. The providers of Versatile Authentication Platforms can still provide the server-side interfaces, but with more flexibility in supporting different devices. And providers of online authentication services can still act as brokers and service providers – for many online services that will remain the better choice than direct support for FIDO. There might even be services that are brokers for “non-FIDO clients” and act as FIDO clients.
Overall, there is good potential for the FIDO Alliance, despite the fact that it requires the installation of a client component. I greatly appreciate everything that makes the Internet more secure, and I will closely watch the progress of the FIDO Alliance. However, I have seen so many concepts in that area come and go that I would not bet on its success.
30.04.2013 by Martin Kuppinger
On Thursday, I was moderating a panel discussion at infosecurity Europe (InfoSec), the leading UK security fair, which hosts a program of keynotes and panel discussions. My panel was titled “Smarter security spending: Optimising spend without exposing the business”. Panelists were Dragan Pendić, Chief Security Architect, Global Information Management and Security, at Diageo; Michelle Tolmay, Security Officer, ASOS; Cal Judge, Information Security Head, Oxfam; and Graham McKay, CISO, DC Thomson.
We had a very interesting, well-attended session with some good questions during the Q+A following the panel discussion. The key take-aways for smarter security spending that we came upon during the discussion were:
- Involvement of users
- Common language
- Risk
- Big picture
Getting the users on board was one of the most important themes of the discussion. Without increasing involvement and understanding of people for Information Security, it is hard to get the buy-in and support you need, from both management and the end users. This is an important element within what KuppingerCole calls Information Stewardship.
The involvement of people is tightly related to the need for a common language – talking in business terms instead of tech talk. Information Security is about the "I" in IT, not primarily the "T": business is interested in protecting information, not technology. The latter is just a means to protect information.
For that common language, the concept of “risk” is of central importance. Business thinks in risks. Managers are used to basing their decisions on risk. Mitigating and taking risks is part of their daily job. Risks also help in moving IT from the role of the notorious naysayer to the business enabler. If business requests a service, instead of pointing at all the technical challenges and no-gos, it is better to show some options, their benefits, their cost, and the associated risks. That enables the business to make informed decisions.
Risk, on the other hand, is also the foundation for smart spending when investing in Information Technology – the T in IT. Understanding the risk mitigation impact of a technology and its benefit for the business helps in making better decisions. It helps in moving from point solutions and decisions made in "panic mode" after an incident towards structured, well-thought-out decisions based on the best risk/reward ratio (RRR). This always includes understanding the big picture: how do new solutions fit into it? Smart spending requires a smart balance between defining and understanding the big, strategic picture and taking tactical steps towards it that provide the best RRR.
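An illustrative calculation (my numbers, not figures from the panel): a control costing 100,000 € per year that reduces an expected annual loss of 1,000,000 € by 30% buys 300,000 € of risk reduction – an RRR of 3:1. A point solution costing 50,000 € that fully addresses only a 60,000 € expected loss delivers barely 1.2:1, however impressive the demo. Comparing options this way, against the big picture, is what smarter spending means in practice.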
To learn more about that, join us at EIC 2013 – the European Identity and Cloud Conference, Munich, May 14th-17th. Starting with my opening keynote, the topics discussed in that Infosec panel will play an important role throughout the entire conference.