News from the Analyst Summit in London

06.06.2011 by Sachar Paulus

Every summer, Eskenzi PR organizes the IT security analyst and CISO forum. It basically consists of one-on-one meetings between vendors and analysts, and round-table discussions between vendors, analysts and end-users, typically CISOs. And the event this year was excellent!

The quality and density of information is quite high, and the format makes it possible to grasp trends, on the vendor side as well as on the end-user side, in a highly condensed way. So: an ideal opportunity to review a number of technology trends.

Here are a few insights from the event that I want to share with my followers. The list is not exhaustive and represents my personal view on the products discussed, so it is obviously not an objective analyst review as it should be. Nevertheless, it might give you some food for thought…

  • Regarding cloud security, there is always the discussion of how to secure information in the cloud against the cloud service provider, in case one does not trust them. SafeNet has introduced the concept of pre-boot authentication, well known from laptop security, to secure virtual machine images in the cloud. A pretty neat idea – we will see how it evolves, especially since it of course uses a proprietary format (as all device encryption vendors do).
  • You don’t know where your devices are? What is standard on iPhones now comes to all mobile IT devices, including laptops of any kind: location services, including remote destruction and even selective data retrieval. And the best part: the solution is preinstalled in the BIOS by most manufacturers, so just turn it on and the security is there. A great, very pragmatic solution. Go and visit Absolute Software’s portfolio.
  • Standard IRM solutions – and for those not reading my blog regularly, I believe this is a necessary technology for secure cloud usage – lack the means to identify and classify the data to be protected, and thus leave the user alone with that mess. Secure Islands, a small innovative vendor from Israel, provides the solution: it re-uses standard IRM, but integrates nicely into e-mail suites, browsers and local programs. The Secure Islands solution really boosts the usage of IRM because of its simplicity.
  • Knowing where your data is – and who actually accesses it – is an important prerequisite for secure data management and for access management in general. A totally different approach from what we usually see is the one followed by Varonis, which enables IT people to discover – and track, if necessary – where the data is that people are using, and this across all shares, web content management systems, FTP servers and the like. Want to know who actually accessed a specific folder with insider information in the last four weeks? That information is just a few clicks away. Interestingly, most customers buy the solution not because of security needs, but to optimize storage concepts and their implementation.
  • It just happened again this morning: a certificate expired, and I got this damned popup saying that I cannot trust a specific web site any more. I cannot really do anything about it, and I blame the web site owner for not keeping their certificates up to date. Venafi takes care of this problem and helps you manage the thousands of certificates and key pairs that are in use in a professional IT environment (a minimal sketch of the kind of expiry check involved follows after this list).
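To make the problem concrete, here is a minimal sketch (my own illustration, not Venafi’s product) of the kind of expiry check that a certificate management solution automates across thousands of certificates: connect to a host, read its TLS certificate and compute the days left until it expires. The host names are hypothetical.

```python
import socket
import ssl
from datetime import datetime

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch the host's TLS certificate and return the number of days until it expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  6 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after - datetime.utcnow()).days

if __name__ == "__main__":
    for host in ["example.com", "example.org"]:   # hypothetical certificate inventory
        remaining = days_until_expiry(host)
        status = "OK" if remaining > 30 else "RENEW SOON"
        print(f"{host}: {remaining} days left [{status}]")
```

The hard part, of course, is not this check but knowing which hosts and key pairs exist in the first place – which is exactly the inventory problem such products address.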

These are only a few of the companies I have seen, and of course there are more that do a great job, such as Lieberman Software, Imperva or M86Security.

Overall, one might identify a trend: more and more vendors are responding to end-user demand that says preventive controls are nice – and if doable and affordable they are the best one can do – but in the meantime it is necessary to manage the remaining insecurity. So a lot of products focus on more transparency, helping you at least know what is going on – right or wrong.


RSA SecurID breach: it had to happen…

21.03.2011 by Sachar Paulus

As you, dear reader, can imagine, the news about the SecurID breach really shook us analysts here – for a long time we told the story that SecurID was the right compromise between security, convenience and manageability, until SMS became so cheap that it took first place for cheap, manageable and strong authentication.

Much has been said about the management aspects, whether the breach will shake the industry (I personally believe yes, but much more slowly than some people argue) and what it means for the reputation of the world’s largest strong-authentication player. I want to add my two cents on the concept itself.

To do that, I need to go back in time to when I was a postdoc, some (ugh, more than 15) years ago. We were analyzing the strong-authentication landscape, and of course SecurID was already there, with a remarkable footprint in the market. We analyzed a number of technologies, including PKI with different cryptosystems, one-way functions for authentication purposes, HMACs and so on – and among them, of course, the market leader, SecurID. But what really worried us – and remember, it was a time when Europe feared that the U.S. could hear and do everything in our IT systems – were two observations:

  1. The SecurID algorithm was kept secret, and there was no way for us postdoc researchers to get our hands on the code or even a specification of the algorithm; and
  2. there were already a number of open, secure and well-understood algorithms doing basically the same thing.

In fact, soon after, our team developed an HMAC-based authentication scheme with smart cards and mobile readers that was adopted by a number of German players – which, from today’s perspective, of course did not succeed. But back to SecurID: we wondered how such an important player could sell – technology-wise, and in the eyes of a security designer – such a crap thing so well…
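To illustrate what I mean by an open, well-understood alternative, here is a minimal sketch of an HOTP-style one-time password: an HMAC over a shared secret and a moving counter, truncated to a few digits, in the spirit of what was later standardized as RFC 4226. This is my own illustration, not the scheme our team built back then and certainly not SecurID’s algorithm.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time password from a shared secret and a moving counter (HOTP-style)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    shared_secret = b"12345678901234567890"                # RFC 4226 test-vector secret
    print(hotp(shared_secret, 0))                          # expected: 755224
```

The whole security argument rests on the secrecy of the per-token secret and the strength of HMAC – nothing about the algorithm itself needs to be hidden, which is exactly the property SecurID lacked.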

We went on analyzing it without having our hands on the code, and found what was in our eyes a serious weakness, one that to our understanding was by no means required for keeping the security tight: some pieces of information about EVERY user (in fact, about every token) were kept at the customer’s site. And the reason was not user experience, either: if you lose your token, you need to go through a re-personalization process anyway, so it was not for that purpose… So why was this necessary? Of course – remember the times – we imagined a number of reasons that were more political than technical…

Anyway – it was, and it is, an important weakness of the protocol, since it offers an unnecessary attack vector. Any other clever hacker could have come to the same conclusion. Add the right motivation and the right customers – there you go!

It simply had to happen one day.


Cloud Security – the market is evolving

06.01.2011 by Sachar Paulus

The winter holiday season is almost over, and business is claiming our attention back – it was a nice time with family, good food, and so on. But the world didn’t stop, so we had to spend some time looking at a number of products. I would like to mention two here, especially because they help us get closer to the secure cloud.

The first is Novell Cloud Security Service (NCSS for short). It is not clear, according to today’s product categories, whether it is a product or a service, and this shows that we need to abstract more and more from that separation when moving into the cloud. Let me describe it by its main benefits from my point of view: it allows cloud services to run with identities from enterprise-managed identity services, and it lets you monitor security-related information from an enterprise perspective.

Well, this does not seem very interesting at first; after all, we can all set up a cloud service and let users authenticate against our company-run LDAP store. But this is different: it allows enterprise users to use Gmail or other truly open cloud services with their usual identity store, even with SSO (based on SAML, of course). The integration effort with the app services is minimal, and identity information never leaves the company’s control. This way, we can now allow business departments to choose their own cloud service provider while keeping control over the identities and the security of the data (you can even connect this to your SIEM to get alerted appropriately).
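To illustrate the principle (my own sketch, not NCSS internals): in SAML-based SSO, the enterprise identity provider issues a short-lived assertion about the user, and the cloud service accepts that assertion instead of ever seeing the corporate password. Below is a minimal, unsigned SAML-2.0-style assertion built with the Python standard library; a real deployment signs the assertion and exchanges metadata with the service provider, and the entity IDs here are hypothetical.

```python
import uuid
from datetime import datetime, timedelta
from xml.etree import ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def stamp(t: datetime) -> str:
    """Format a UTC timestamp the way SAML expects it."""
    return t.strftime("%Y-%m-%dT%H:%M:%SZ")

def build_assertion(user_email: str, idp_entity_id: str, sp_entity_id: str) -> str:
    """Build a minimal (unsigned) SAML 2.0-style assertion, for illustration only."""
    now = datetime.utcnow()
    ET.register_namespace("saml", SAML_NS)
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion",
                           {"ID": "_" + uuid.uuid4().hex, "Version": "2.0",
                            "IssueInstant": stamp(now)})
    ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = idp_entity_id
    subject = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    ET.SubElement(subject, f"{{{SAML_NS}}}NameID").text = user_email
    conditions = ET.SubElement(assertion, f"{{{SAML_NS}}}Conditions",
                               {"NotBefore": stamp(now),
                                "NotOnOrAfter": stamp(now + timedelta(minutes=5))})
    audience = ET.SubElement(conditions, f"{{{SAML_NS}}}AudienceRestriction")
    ET.SubElement(audience, f"{{{SAML_NS}}}Audience").text = sp_entity_id
    return ET.tostring(assertion, encoding="unicode")

if __name__ == "__main__":
    # Hypothetical identity provider and service provider entity IDs
    print(build_assertion("alice@example.com",
                          "https://idp.example-corp.internal",
                          "https://cloud-service.example.net"))
```

The point such a product makes is precisely that only assertions of this kind cross the company boundary – the credentials and the identity store stay inside.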

Obviously, there is a catch-22 here from a market point of view: cloud service providers like to maintain the users themselves and will integrate other identity stores only when they are ready, so connecting existing identity stores depends on the willingness of the cloud service providers. Novell solves this problem by selling the service to the operators that manage cloud access for enterprise customers, but for those to be interested, CIOs need to formulate the demand… A clever approach, but it may be tedious to sell. Anyway, it works technically, and those telcos that see security services as added value will probably jump on it quite soon – once they grasp the real potential of such a solution.

The second big area of concern in the cloud, besides using identities from managed sources, is the security of information. Classical information security practice recommends classifying information into confidentiality classes and defining data management principles that everyone must apply in order to adequately protect confidential data. Now, everybody involved in that knows how difficult it is to operationalize this strategy, namely to make sure people make the right choices when classifying (at all!) the documents they create and/or handle.

The second product that I find pretty interesting is the one by Secure Islands, called IQProtection, which classifies documents based on configurable rules (keywords, sources, metatags etc.) AND – and this is new – integrates with a multitude of rights management technologies to immediately apply the necessary controls. It can even “change” the protection mechanism, e.g. when a document leaves the company, or when information is taken out of a web site (which can be protected as well) to be used with e-mail and S/MIME. Especially interesting is that they consider E-DRM a commodity and “only” deal with the management processes and the application of the protection mechanisms. Cool stuff, especially when data is in the cloud. And of course, it can integrate with existing identity services for the credentials, closing the loop with my first example.
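To make the rule idea concrete (my own sketch, not IQProtection’s engine): classification can be expressed as an ordered list of predicates over a document’s source, content and metatags, where the first matching rule determines the label, and the label then selects the rights-management template to apply. All names here are hypothetical.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Document:
    source: str                                   # e.g. "erp-export", "public-web"
    text: str
    metatags: dict = field(default_factory=dict)

# (label, predicate) pairs, evaluated top-down; the first match wins
RULES = [
    ("strictly-confidential", lambda d: d.metatags.get("project") == "m-and-a"),
    ("confidential", lambda d: re.search(r"\b(salary|payroll|insider)\b", d.text, re.I)),
    ("confidential", lambda d: d.source == "erp-export"),
    ("internal", lambda d: d.source != "public-web"),
]

def classify(doc: Document) -> str:
    for label, predicate in RULES:
        if predicate(doc):
            return label
    return "public"

if __name__ == "__main__":
    doc = Document(source="fileshare", text="Q3 payroll adjustments for the sales team")
    label = classify(doc)
    print(label)                                  # -> "confidential"
    # In an IRM workflow, the label would now pick the protection template, e.g.
    # apply_rights_template(doc, template_for[label])   # hypothetical helper
```

The value, as described above, is that this decision is taken automatically at the point where the document is created or moved, instead of hoping that users classify correctly.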

So, as I said, I think the market is moving, and we will see a lot of innovative stuff in this area over the coming months.


IT-SA conference takeaways

26.10.2010 by Sachar Paulus

It has been a long time since my last post… Anyway, lots of first-year students and research grant applications kept me busy.

The IT-SA is now THE event for IT security in Germany. It does not have the flavour of the RSA Conference, although it may actually be of a similar size, at least in the exhibition area. It is much more about small conferences around the exhibition floor, organized and owned by different people and groups, such as the AppSec conference in Germany or the KuppingerCole Enterprise Cloud Security summit. Consequently – and this is especially true for folks from abroad – don’t expect a huge number of people showing up at your booth; you need to organize traffic yourself. But then – ooh la la, lots of intense discussions…

A few more takeaways from my side on the content of the IT-SA:

1. “Bring your own device” is now a mainstream topic. Security folks: like it or not, you will need to cope with it. There are a number of arguments for it being a financially sound decision. But what does it really mean security-wise? My take is that IT security people now need to think about how to protect corporate information instead of protecting the infrastructure from viruses. Come on, be honest: company-confidential information is already on devices that are not under your control, even today. The solution: intelligent awareness, and – maybe some day – intelligent IRM.

2. IRM, IRM, IRM: the more I wandered along the different booths, the more I saw the need for a good solution. All these offerings that pretend to make your IT secure but actually don’t (no, I won’t name them) suffer from information not being protected adequately, still relying on a benign, controlled infrastructure. You know that time is over, right? Unless you are a bank (you make your money yourself) or a government (you don’t even need money in the first place ;-), chances are quite bad that you know what is going on in your network – er, on your machines – er, I mean on the devices in your network…

3. Privacy-friendly IDM: there is a trend to use IDM against people’s intentions. And indeed, that may happen if the data is under the legitimate control of the authority maintaining the IDM information. Consequently, we need to think about how to do this in a privacy-friendly way. There are cryptographic protocols and frameworks available, such as Microsoft U-Prove and the new German eID card. We need to spread the word that this is indeed possible!

And finally, 4.: the cloud is real. Companies no longer think about whether they will do it, but about HOW, and how security can be set up. Most importantly, companies were asking how to extend their security management processes to the cloud provider. And indeed, ISO 2700x et al. can be applied, but they don’t provide operational help. ITIL is much better suited, but does not really cover confidentiality…


The GRC Marketplace is shaking up: SAP and CA partnering on GRC

11.08.2010 by Sachar Paulus

In the last few weeks I had a number of interviews and product/vendor briefings about GRC-related products. And as you may have noticed, the marketplace is still pretty unstructured. Since there is still no generally accepted common definition or reference architecture for GRC (although I have developed one, see my reports), anyone touching functionality related to GRC assumes it is at the core. And so you can find extended document management solutions there (for policy management) as well as controls and IT-controls management tools, besides access governance and financial risk management applications.

I believe, though, that it only makes sense to implement a holistic GRC management framework in an enterprise if there is a common, integrated and standardized way of managing policies, controls, risks and improvement projects. There is no value in buying a multitude of isolated solutions, each performing extremely well on certain aspects, because then the integration know-how still resides with the people – and isn’t GRC exactly about reducing the risk the enterprise is exposed to through the involvement of people, whether for personal, political or financial motives?

The real value of GRC projects only comes – very much like with ERP, history repeating itself – with an integrated framework. There are two ways of achieving this: first, by standardization (as with SOA), and second, by market dominance (as with R/3). And to be honest, none of the vendors I have been able to listen to is, in my view, in a position to advance the standardization path in that market.

With the recently announced partnership between SAP and CA, SAP pursues – similarly to Oracle – a pretty intelligent move: they will be able to integrate real-time information from SIEM and other solutions from CA, one of the established players in the IT infrastructure space. The mere announcement will shake up the space: until now, GRC was about prevention and mitigating activities, while the reaction part was left to IT or to other response functions (fraud management, corporate security, etc.). With this partnership, GRC actively covers a “real-time” view of the threat/risk situation.

Another aspect: with a partnership of two giants, de-facto standardization will happen automatically. If, say, RSA now wants to provision SAP GRC too, they will need to adopt the interface definitions that the two have defined…

So: good move, SAP and CA.


Impressions from the IT-Analyst Event in London

19.07.2010 by Sachar Paulus

Last week I was invited to the IT-Security Analyst & CISO Forum event in London, with a few vendors and a few CISOs. The format of the event is unique, and thanks to Eskenzi PR it is an excellent opportunity to gather CISOs’ expectations and the vendors’ answers to them. Here are a few impressions and takeaways:

- “Most of the vendors’ products are crap; they are fundamentally flawed in the sense that they do not increase security a pence,” as one of the CISOs said (Chatham House Rule applied). More specifically, when asked for details: most tool and product vendors still rely on the wrong assumption that CISOs want to “extend the border of the enterprise” or “secure the perimeter”. But this is good for nothing. For businesses to be productive, information has to flow, and it must be protected where it flows – not retained “within” the enterprise.

- Consequently, DLP (Data Leakage Prevention) is a market that does not really exist. Those who buy DLP do so for compliance purposes, just like buying anti-virus products (even though those do not even detect 40% of the more recent attacks…). So the chance that actual DLP products will really detect or prevent information leakage is pretty low.

- Secure software development is still, to a large extent, not understood – neither by vendors nor by CISOs. They mostly think they are done with the subject when they employ white-box testing and use an application-level firewall. Oh man – so much work ahead to communicate what this is really about.

- At the top of their priority list (very interesting): the “bring your own device” policy – how to enable business infrastructure and applications to securely support personal devices (from notebooks to smartphones) as endpoints. A very interesting direction: we finally arrive at the “everything is on the internet” assumption for company information access, driven by a rather financial motivation… Still, many questions around legal responsibilities and technical capabilities remain to be solved.

Now to the vendors (just a few interesting notes):

- FaceTime (the name will need to change, after Apple’s announcement that its video conferencing on the iPhone is called just that) basically does compliance-driven monitoring and management of social media usage for enterprises. Seems low-profile. But driven by customer innovation, they have built a strong capability for detailed authorizations for internet apps, so they in fact do “GRC access control” for internet apps… An interesting development.

- S21Security from Spain, currently perceived as a SIEM vendor in the financial vertical, is actually able to detect fraud on the basis of log information from core banking systems, with first experience in the SCADA world as well. So they actually do interesting GRC analytics…

- BeCrypt has a nice application to simply but securely extend the enterprise using bootable USB sticks. Defence-grade!

- M86Security, one of the largest vendors of real-time threat detection for the web, with a footprint of 24,000 (!) customers, seems to be a pretty useful solution – what if they offered this “as a service” for consumers, who would route their web traffic through one of their servers? That would be pretty cool…

All in all, the market is slowly changing from pure compliance products towards real protection solutions. This is definitely a sign that customers are getting more educated about the real threats. But on the other hand (see the note on secure software above), there is still a long way to go…


Cloud Security = IDM+ERM, BUT: who will drive it is the real question!

29.06.2010 by Sachar Paulus

My last blog post, on what will really be necessary to secure applications in the cloud, was heavily discussed, which I think is a good sign – obviously there is something to discuss…

But let’s get down to the real problems. Of course, DRM is not the same thing as ERM (let me stick to ERM for the time being), and most of the companies that have integrated DRM technology into their content offerings have absolutely no clue about the potential complexity of access rights one might need in a company context – just look at the average number of enterprise roles in a medium-sized company. BUT: they are successful, for two reasons:

a) they simplify the processes and interfaces for the user as much as they can, and

b) they use one specific business process.

Maybe it is just the overly generic approach of most ERM offerings that explains their relatively low usage. Some companies that are actually starting to “profile” specific ERM usages along the lines of certain business processes in verticals (Adobe, and to some extent Oracle, from what I have seen) may have understood this. So again, content context is key to leveraging ERM technology.

But the really hard problem is, of course: how will we deal with protected digital documents (including XML “records”) across company boundaries? The myth of being at the center of everything by providing a proprietary format – and thus forcing users to accept one specific solution – will not work as soon as processes cross multiple companies; just look back at PKI… So there is a need for interoperability and standards.

But who will take the lead here? The content providers? Actually, I could imagine a future where a BI report (the sales pipeline, say – real-time, once a day) is no longer protected by deep, complex authorization objects in the ERP/BI system; instead, the report is generated as a piece of content (maybe including video) and equipped with consumer-like protection (“this copy is for you, and you can send it to 3 friends…”). Sounds weird, but it is actually not that far from reality: it may be simpler to do it that way than to map the complex ERP authorizations and roles – via federated identity management and integrated, interoperable ERM – onto ERM “authorizations” and to contact Access Decision Servers using standardized formats…

Don’t get me wrong, the “BI as content blob” protection concept is far from ideal, and the other mechanism would be the “real” solution… But to avoid such a situation (and I am sure such a model would find vast acceptance, except among those responsible for security ;) ), we need the major players to come together and address the following issues:

1) What needs to be standardized, exactly? Document formats? Authorization semantics? Exchange protocols? Policy mapping? Communication protocols with Access Decision Servers?

2) Who can contribute what? And where to start? Simple solutions first to get things going, or do it right from the beginning? Would that be an initiative similar to the Liberty Alliance, or more a standardization effort like WS-*?

3) How to integrate the structured with the unstructured world? There are first attempts, but only based on bilateral integrations, without any thought of standardization (back at SAP I drove this to some point, but only now are the first results becoming visible…).

So the topic is much more difficult in reality than one might think. Simply adopting one of the ERM vendors does NOT solve the problem. That would only solve local issues, and thereby produce others…


Cloud Security = Interoperability for DRM

17.06.2010 by Sachar Paulus

This week was very interesting for me. I had a number of calls and meetings with people dealing with the software components and architectures that will make the cloud secure.

And the most interesting observation is: actually, everything is already there. We as an industry could simply start doing secure clouds right away. It is of course not so much about the standard stuff that we often hear: trust in the cloud providers, their ability to deal with data privacy requirements, or the multi-tenancy capabilities of enterprise cloud services.

No. It is actually about how to secure the data between and within cloud services. And the key to achieving this is DRM technology. It is pretty straightforward when one thinks about data storage in the cloud: obviously, Information Rights Management or Enterprise DRM will take over the role of drive encryption in cloud-based models for sharing and storing data.
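To make that claim concrete, here is a minimal sketch of the general IRM/E-DRM idea (my illustration, not any particular vendor’s format): the document is encrypted once with a per-document content key, the ciphertext can then live on any cloud storage, and usage rights are enforced by whoever controls the key and policy service. It uses the third-party cryptography package; a real product would authenticate users properly and keep keys inside a protected viewer, and all names here are hypothetical.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# 1. Protect: encrypt the document with a fresh, per-document content key.
content_key = Fernet.generate_key()
ciphertext = Fernet(content_key).encrypt(b"Q3 sales pipeline - confidential")

# 2. The ciphertext can be stored or shared anywhere (cloud drive, e-mail, ...).
#    Only the policy/key service keeps the content key, together with the policy.
policy_store = {
    "doc-42": {"key": content_key, "allowed_users": {"alice@example.com"}},
}

# 3. Consume: a viewer asks the key service for the key; policy is checked first.
def request_key(doc_id: str, user: str) -> bytes:
    entry = policy_store[doc_id]
    if user not in entry["allowed_users"]:
        raise PermissionError(f"{user} has no usage rights for {doc_id}")
    return entry["key"]

key = request_key("doc-42", "alice@example.com")
print(Fernet(key).decrypt(ciphertext).decode())    # -> the protected content
```

This is exactly why interoperability matters: the ciphertext format and the protocol for talking to the key and policy service differ from vendor to vendor.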

But what seems to be less obvious is that the same technology can, in principle, also be used to protect information within applications. Note that the media industry has already addressed a number of the issues involved, such as streaming with DRM protection or handling multiple copies of the data.

There is one missing piece, though – well, not really a missing piece: interoperability. The formats of DRM-protected information differ widely from vendor to vendor, and there are three big players again: Microsoft, Adobe and Apple. It will be interesting to see how the battle, especially between the latter two, will affect how the protection formats evolve.

And as with other standardization battles, there will be room for companies to exploit this missing interoperability by developing tools that help bridge it. I’d be curious who will take on that challenge…


My new iPad and Identity Management

01.06.2010 by Sachar Paulus

Today I ordered my new iPad. I am really eager to use it, especially as a multi-purpose information and media device for the home. So far, so good. Obviously a device like this will be THE front end for the brave new cloudy web services world. Whether via classical HTTP(S) requests or via WS-*, the apps on these kinds of devices will make the cloud happen for the average home user.

But I am not sure how this fits the identity management demands of these services. Haven’t we seen so many integration and convergence trends in the identity space in the last months? How do these actually match the front-end development trends? Obviously, the latter will make the market, so how will the security folks follow?

Or, simply put: who takes care of my credentials on these devices? Do I need a credential per app, or what? We have put so much effort into getting rid of this problem on standard platforms – how will the mobile market adopt those solutions? Or will it simply be the provider who takes care of it – they know our identity anyway…

Lots of unsolved security questions, not to mention the need for data encryption at rest – rest? What rest? Er, I mean “in-memory encryption”…

So in the end I am not sure whether the iPad will make us more secure. I cannot even give a guess. That is bad.

