Database Security – a strategic perspective

31.03.2011 by Martin Kuppinger

In recent months I’ve done a lot of research on database security, talking with vendors such as Oracle, IBM (Guardium), Sentrigo (now McAfee), Imperva, Bitkoo, and others, as well as with several end-user organizations that are either using database security products or evaluating those technologies.

When looking at this market, it is important to understand that it is not homogeneous. Solutions range from firewalls to specific tools for label security or data masking. Some are tightly integrated with databases, others are non-intrusive. I will provide a broad overview in an upcoming research note covering the entire database security market and the vendors in it.

But before selecting the right vendor and the right tool for your database environment, you should ask and answer another question: how does this fit into your overall IT security strategy and implementation? I’m no friend of point solutions in security. Solving one problem without looking at all the others doesn’t necessarily increase the overall level of security achieved. It might feel better, but frequently too much attack surface remains.

Just think about securing your databases with a firewall. Some of the attack surfaces left are:

  • Security issues in the applications which access data in the databases
  • Administrative actions
  • All actions performed locally at the database server
  • Copying or deleting the database with administrative access at the operating system level

And that’s just a short and incomplete list. From a strategic perspective, you have to look at how to secure the entire stack. Recently I was with a customer who was discussing where best to start securing his applications. As a quick start, I proposed that he build a simple spreadsheet with his (defined) 30 most critical apps and the stack each of them uses – including operating system, application platform, hypervisor, and of course the database. That simple spreadsheet gives him an impression of the dependencies he has to keep in mind – it visualizes that security isn’t about point solutions.
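As an illustration, here is a minimal sketch of what such a dependency inventory could look like once it outgrows a spreadsheet. The application names and stack entries are purely made up for the example, not taken from the customer case.

```python
# Minimal sketch of an application-to-stack dependency inventory.
# Application names and stack entries are purely illustrative.

from collections import defaultdict

# Each critical application mapped to the stack layers it depends on.
app_stack = {
    "HR Portal":      {"os": "Linux",   "platform": "Java EE",   "hypervisor": "ESX", "database": "Oracle"},
    "Order Entry":    {"os": "Windows", "platform": ".NET",      "hypervisor": "ESX", "database": "SQL Server"},
    "Data Warehouse": {"os": "Linux",   "platform": "ETL suite", "hypervisor": None,  "database": "Oracle"},
}

# Invert the mapping: which applications depend on which database?
# This makes visible how far a weakness in a single layer reaches.
apps_per_database = defaultdict(list)
for app, stack in app_stack.items():
    apps_per_database[stack["database"]].append(app)

for db, apps in sorted(apps_per_database.items()):
    print(f"{db}: {', '.join(apps)}")
```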

I’m not saying you shouldn’t invest in database security – but it should be one element of security among others. Thus, database security has to be put into context.

One interesting aspect here is database firewalls. There are several firewalls out there inspecting packets for SQL traffic based on policies. But if you are inspecting packets anyway – why not for everything? CIFS/SMB traffic to file servers? Web service security? That would allow a consistent set of policies to be applied wherever appropriate. It would provide a consistent layer of security. Of course that won’t solve every problem, but the advantage over having a “DB firewall”, a “SharePoint firewall”, a “CIFS/SMB firewall”, and so on is obvious. Another example is privileged user (account, identity, access) management, i.e. PxM. That is important for database management systems, but it is just as important for other types of systems (apps, operating systems, hypervisors, network appliances,…). I’d opt for a solution which covers them all.
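To make the idea of one consistent policy layer a bit more tangible, here is a hedged sketch – not any vendor’s product, and the rule attributes are pure assumptions – of a single, protocol-independent rule set evaluated for requests whether they arrive as SQL, CIFS/SMB, or web service calls:

```python
# Illustrative sketch only: one policy set applied across different protocols.
# Rule attributes (resource, action, allowed_roles) are assumptions, not taken
# from any specific database, file, or application firewall product.

from dataclasses import dataclass

@dataclass
class Request:
    protocol: str   # "sql", "cifs", "webservice", ... (informational only here)
    resource: str   # table, file share, or service endpoint
    action: str     # "read", "write", "delete", ...
    role: str       # role of the requesting identity

# A single rule set; the rules deliberately ignore the transport protocol.
POLICIES = [
    {"resource": "hr.salaries",        "action": "read",   "allowed_roles": {"hr_admin"}},
    {"resource": "\\\\files\\finance", "action": "write",  "allowed_roles": {"cfo", "controller"}},
    {"resource": "/api/orders",        "action": "delete", "allowed_roles": {"order_admin"}},
]

def is_allowed(req: Request) -> bool:
    """Deny by default; allow only if a rule explicitly matches resource and action."""
    for rule in POLICIES:
        if rule["resource"] == req.resource and rule["action"] == req.action:
            return req.role in rule["allowed_roles"]
    return False

print(is_allowed(Request("sql",  "hr.salaries",        "read",  "hr_admin")))  # True
print(is_allowed(Request("cifs", "\\\\files\\finance", "write", "intern")))    # False
```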

Of course there are also many database-specific aspects of security, such as data masking. And given that there is no “multi-purpose firewall” or other single solution covering everything, it is about using several solutions. There are also good reasons for specialized tools – easier to implement, easier to manage, more specific features. However, they should be used as part of an overall strategy, not as isolated point solutions. Customers have to look at it from that perspective – and vendors should move forward to provide more integrated solutions over time.

Good security is achieved by strategy, not by tactics.

EIC 2011: Munich, May 10th to 13th – the place to be for IAM, GRC, Cloud Security, Database Security, and more…


RSA SecurID again

23.03.2011 by Martin Kuppinger

I blogged last week about the RSA SecurID case. In the meantime there have been several other posts and pieces of advice on it, and I’d like to pull together some thoughts of my own, looking at what customers should do now.

What should existing customers do short-term?

In most cases, RSA SecurID will be a standard mechanism for strong authentication which can’t be replaced immediately. Customers who don’t use a solution for versatile authentication usually aren’t able to switch to another (stronger) authentication mechanism on the fly. Not using RSA SecurID, however, would make things even worse, because it would mean stepping back to a single factor with one or two means of authentication. So it is about staying with RSA SecurID and deciding which additional actions to take – “compensatory controls” such as increased auditing, additional fraud detection technologies, and so on.

Customers who have a versatile authentication approach in place might evaluate whether they can replace RSA SecurID with another factor – which, for time and logistics reasons, would then be an approach not depending on hardware. However, doing that will be somewhat complex (helpdesk calls, technical aspects,…). Customers should therefore first check whether the increased risk of using RSA SecurID is acceptable or not. Instead of replacing it, adding another factor/means for high-risk interactions and transactions appears to be the most appropriate option. Besides this, the auditing actions mentioned above have to be implemented.

What should existing customers do mid-term?

Replacing a technology like RSA SecurID is quite expensive. Given that RSA will harden its own systems and seeds can be changed over time, the threat will decrease. However, as mentioned in my last post, RSA SecurID will never be the same again. The mid-term answer, from my perspective, is versatility. Having more options for quickly switching to other or additional factors and means of authentication is the most promising approach. RSA SecurID then becomes just one of multiple approaches.
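As a rough illustration of what such versatility could mean in practice, here is a minimal sketch. Mechanism names, strength values, and risk thresholds are assumptions for the example, not a recommendation – the point is only that factors are interchangeable and chosen per transaction risk, so a weakened mechanism can be skipped:

```python
# Sketch of a versatile authentication layer: mechanisms are interchangeable
# and the factors required for a transaction are derived from its risk.
# Mechanism names, strength values, and thresholds are illustrative assumptions.

AVAILABLE_MECHANISMS = {
    "password":   1,   # strength rating (illustrative)
    "otp_token":  2,
    "biometrics": 3,
}

REQUIRED_STRENGTH = {"low": 1, "medium": 3, "high": 4}

def select_factors(transaction_risk, compromised=frozenset()):
    """Pick factors, strongest first, until the required combined strength is
    reached, skipping any mechanism currently considered weakened."""
    needed = REQUIRED_STRENGTH[transaction_risk]
    chosen, total = [], 0
    for name, strength in sorted(AVAILABLE_MECHANISMS.items(), key=lambda kv: -kv[1]):
        if name in compromised:
            continue
        chosen.append(name)
        total += strength
        if total >= needed:
            return chosen
    raise RuntimeError("no sufficient combination of factors available")

# If the OTP token is considered weakened, a high-risk transaction falls
# back to a different combination instead of being blocked entirely.
print(select_factors("high"))                             # ['biometrics', 'otp_token']
print(select_factors("high", compromised={"otp_token"}))  # ['biometrics', 'password']
```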

For high-risk environments, biometrics might come into play again (if not used yet). In addition, there are some two-factor authentication approaches which don’t rely on seeds and secret algorithms. However, they aren’t necessarily absolutely secure (if anything can be absolutely secure), so customers should carefully evaluate whether other approaches provide real advantages over the established RSA SecurID approach. The same level of mistrust should be applied to all types of authentication.

What should potential buyers do?

It is about re-evaluating the strategy for authentication. Versatility is key – and strategies need to be re-thought if they are not focused on a versatile approach allowing different types of authentication mechanisms to be used and exchanged flexibly. Regarding RSA SecurID, the risk has to be rated anew, and the decision whether the approach is sufficient for the interactions and transactions which have to be protected has to be reviewed. From my perspective it is not so much about abandoning RSA SecurID (depending, of course, on what RSA does to increase security again – but I assume they will do a lot) but about carefully analyzing the level of protection provided and weighing it against the risk of authentication fraud for what has to be protected. When deciding to use RSA SecurID, appropriate controls have to be implemented – but that is true for any other authentication mechanism as well.

By the way: regardless of the RSA SecurID approach, any authentication strategy which doesn’t focus on versatility, risk-based authentication/authorization, and context-based authentication/authorization should be re-thought.

Some general thoughts:

RSA has had a very strong image for its RSA SecurID approach – and it worked for many years. However, there are two fundamental issues:

  • Centralized seeds
  • Confidential algorithm

Both are inherent risks of that mechanism, so its security is obviously limited. Regardless of which approach you use, thinking about the potential weaknesses (social phishing; central stores which might become targets for attackers;…) is important. Unfortunately, security comes at a price: there are no simple, cheap, easy-to-use approaches without logistics costs and other shortcomings that provide perfect security.

Again, as mentioned in my last post, we will discuss topics like versatile authentication and the RSA SecurID incident at EIC 2011. You shouldn’t miss that event.


RSA SecurID – it will never be the same again

18.03.2011 by Martin Kuppinger

Yesterday RSA reported a security breach which it assumes to be an “advanced persistent threat”, i.e. a long-running attack. In this case it was apparently directed against the seeds, i.e. the base material for the keys used to generate OTPs. In other words: the first and (until now) stronger part of the two-factor authentication used with RSA SecurID cards isn’t secure anymore. In fact, it shows that every approach which relies on a central database of such seeds has inherent security weaknesses.
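RSA’s actual SecurID algorithm is proprietary and not public, so as a purely illustrative stand-in here is a sketch based on the open TOTP scheme (RFC 6238), which also derives one-time passwords from a shared seed. It shows why the seed is the critical secret: whoever holds it can compute every OTP that token will ever produce.

```python
# RSA SecurID's algorithm is proprietary; this sketch uses the open TOTP scheme
# (RFC 6238) to illustrate the general principle: the one-time password is
# derived entirely from a shared secret ("seed") plus the current time.
# Whoever obtains the seed can compute every OTP for that token.

import hashlib
import hmac
import struct
import time

def totp(seed: bytes, timestep: int = 60, digits: int = 6) -> str:
    counter = int(time.time()) // timestep            # time-based moving factor
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

seed = b"per-token-secret-provisioned-by-the-vendor"  # illustrative value only
print(totp(seed))   # the "something you have" factor collapses to knowledge of this seed
```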

The importance of this breach becomes obvious when looking at the SEC 8-K filing – it was important enough to require an official notice to the stock markets.

The bad thing in this case is that it’s not a single RSA SecurID card which is compromised, as would always be the case with stolen or lost cards – it is potentially all RSA SecurID cards, because it is currently known neither which seeds are compromised nor by whom. RSA has (under the link above) provided hints for its customers on what to do now. They claim that there is no immediate impact for existing customers. But looking at the actions RSA recommends, it becomes obvious that customers can’t do much – short of fundamental changes, e.g. re-basing RSA SecurID implementations on new seeds (with all the consequences of such a change) or switching to other/additional authentication mechanisms.

Our recommendation for customers who have RSA SecurID cards deployed is to first carefully analyze the situation and their specific risks – e.g. which types of information are at risk if RSA SecurID-based authentication is not only threatened (as now) but actually attacked? Based on that, they can decide whether immediate actions have to be taken (besides telling users to be even more careful about phishing mails and the like, because the authentication strength now relies mainly on the PIN). These actions might range from increased threat analysis and forensics to adding other authentication technologies.

Beyond that, customers should take the opportunity to rethink their authentication strategies in general. How could they implement a flexible, versatile approach for all types of users and the different security requirements of interactions and transactions? Versatility definitely helps – having a versatile approach for authentication in place allows additional factors and means to be added, or individual factors to be replaced, quickly – which isn’t the case when only one specific mechanism (like RSA SecurID) is used.

What does it mean for RSA? The “trusted name in security” will definitely suffer – trust takes a long time to build and a very short time to lose. However, RSA isn’t that heavily dependent on the success of RSA SecurID anymore. The company has broadened its portfolio. RSA SecurID is sort of a “cash cow”, but there are other offerings, including versatile authentication solutions. And, honestly: if you use RSA SecurID tokens as one element within a versatile authentication strategy, there is value in that approach – especially given that RSA will do everything to ensure that such a breach never happens again (though there is always some risk of unlikely things happening). But it is only one approach of many, and the best thing customers can do is to become more flexible, more versatile.

Interestingly, this fits well with research we are currently working on. One part will be a research note comparing the strength and usability of different authentication mechanisms; the other has the working title “Why hardware-based OTP is dead” – the latter not mainly because of the potential security risks, but because of the limited usability and the cost of logistics. So have a look at our upcoming research. The topic will also be discussed at EIC 2011 (European Identity Conference), May 10th to 13th, Munich.


Database Security – a hot topic

16.03.2011 by Martin Kuppinger

During the last few months I’ve done a lot of research around database security, and some advisory work. This market, with the very big player Oracle and its large number of offerings, and IBM as another heavyweight vendor, is growing rapidly. Besides the two big ones there are several specialized vendors like Sentrigo, Imperva, Bitkoo, NetIQ, and others – I’ll cover the market soon in an upcoming research note providing an overview of all key players. Have a look here regularly – the research note will be out by mid-April at the latest… By the way: you’ll also find there a new research note on Database Governance, the “umbrella” for database security.

But that’s not my main topic for this post. One of the questions which pops up frequently around database security is whether specific tools really help. What about outgoing traffic when a database firewall only looks at incoming SQL requests? What about the risk of a root admin at the OS level deleting (or copying) database files? What about the price to pay for encryption? What about the technical users and the hard-coded business rules in applications which access databases and filter the result sets? And these are only some of the questions we hear in our webinars (several of them around database security, available as recordings) and our advisories.

These questions all touch one point: Database security is not about implementing a single tool. There is a good reason for Oracle having many different tools in their portfolio, but even database security itself isn’t sufficient. It is about looking at the entire topic of Information Security, from the applications to databases (and business analytics!), the operating system, and the network. Understanding where and why information is at risk is key for defining the security strategy, the controls, and selecting the tools which in combination mitigate the risk to a level which appears to be acceptable.

Thus, successful database security strategies can never be defined by looking only at the databases; they need to be defined in the overall context of Information Security. Databases are part of the overall IT ecosystem, and database security is part of overall Information Security – just as Database Governance is an element within Corporate Governance and IT Governance. It’s always about “strategy first” when it comes to security – and it’s about avoiding point solutions.

You can learn a lot more about trends and best practices around Database Governance, Database Security, and overall Information Security at the European Identity Conference 2011, Munich, May 10th to 13th.


We need a policy standard for the use of data

10.03.2011 by Martin Kuppinger

One of the issues I’m confronted with in most of the advisories I do is “how to protect information once it leaves a system”. A typical situation is that HR data leaves the HR system and is imported into another system – the identity provisioning system, a business analytics tool, or whatever else. Once information is out of HR, it is out of control. Lost somewhere in the happy hunting grounds of information…

However, from a governance perspective (and due to many specific regulations) we have to keep control. PII has to be well managed, financial data has to be well managed, risk-related information as well – and the same is true for many other types of information.

What can we do to address this? The first step is to have a governance process in place for the export/import interfaces. There needs to be a defined process to decide which information is allowed to be exported for use in other systems. Knowing the paths data can take is an important step; having processes, approvals, and documentation in place definitely helps. Limiting those paths by using fewer systems to distribute information helps as well. You might have 100 interfaces between your HR system (or whatever system you are currently looking at) and the target systems, or a single interface to a hub, which might be an identity provisioning tool, an ETL (Extract, Transform, Load) platform, or something else.
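A quick back-of-the-envelope illustration of why the hub helps – the numbers are generic, not from any specific environment:

```python
# Back-of-the-envelope comparison of interface counts: point-to-point vs. hub.
# The figures are generic examples, not from any specific environment.

sources = 5      # e.g. HR, CRM, and other authoritative systems
targets = 100    # systems consuming the exported data

point_to_point = sources * targets   # every source feeds every target directly
via_hub = sources + targets          # every system connects exactly once, to the hub

print(f"point-to-point interfaces to govern: {point_to_point}")  # 500
print(f"interfaces to govern with a hub:     {via_hub}")         # 105
```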

However, all of these are limited solutions. They don’t address what happens to the data in the target systems. Of course you can expand the processes and policies and try to ensure that the information is handled appropriately in the targets, but that is a pretty hard task.

Some time ago I discussed with Kim Cameron (Microsoft) how to protect the PII a user provides – how can the user keep control over “his” PII? Once he hands it over to a system, he has lost control. We might apply techniques derived from IRM (Information Rights Management) to this data; however, that would be a big beast to tame, with all the encryption, user management, and policies involved. Kim brought in his idea of using policies which are somewhat “sticky” to the PII and enforced by the target systems.

For the foreseeable future this is probably the only approach which will really work, for any type of information flow: a policy standard which defines the rules for using the data. The policy has to flow with the data, and it has to be enforced by the target systems. It could include rules covering aspects like the sensitivity of the information (PII, financial,…), whether it can be exported again, specific protection requirements, limitations on access (“only users with clearance …”), and so on. When information from different sources is combined in a target system, a resulting policy has to be calculated which again is sticky to that data.
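To make that less abstract, here is a minimal sketch of what such a sticky policy could look like as a data structure, including the calculation of a resulting policy when data from several sources is combined. The field names and merge rules are assumptions for illustration, not an existing or proposed standard:

```python
# Sketch of a "sticky" usage policy that travels with exported data.
# Field names and merge rules are illustrative assumptions only,
# not an existing or proposed standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class StickyPolicy:
    classifications: frozenset   # e.g. {"PII"}, {"financial"}
    re_export_allowed: bool      # may the receiving system export this data again?
    required_clearance: int      # minimum clearance level needed to access the data

def merge(a, b):
    """When data from two sources is combined, the resulting policy inherits
    the stricter setting in every dimension."""
    return StickyPolicy(
        classifications=a.classifications | b.classifications,
        re_export_allowed=a.re_export_allowed and b.re_export_allowed,
        required_clearance=max(a.required_clearance, b.required_clearance),
    )

hr_policy = StickyPolicy(frozenset({"PII"}), re_export_allowed=False, required_clearance=2)
fin_policy = StickyPolicy(frozenset({"financial"}), re_export_allowed=True, required_clearance=3)

print(merge(hr_policy, fin_policy))
# -> classifications {'PII', 'financial'}, re_export_allowed False, required_clearance 3
```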

That might sound a little complex, but from my perspective it isn’t really that difficult to implement. Is there any chance of success? I think so. Once a standard for such policies is defined and implemented by some of the larger vendors, it will become a decision criterion for information systems: do they support this standard, so that I’m better able to fulfill my governance and compliance requirements? Such a standard will help in achieving data governance and in supporting data owners (and many other players…).

As with any standard, this requires large vendors to participate and to push it. Given that most large vendors are facing these issues, participation should be a no-brainer.

I’m very open to engaging in such an effort – bringing the participants together, handing it over to a standardization initiative, providing my thoughts as initial input, and helping with dissemination. So if you are interested in participating – just send me an eMail. Once I have sufficient initial support, I’ll try to get the remaining big vendors on board.

And, by the way: don’t forget to register for EIC 2011 and Cloud 2011 – the early bird discount ends soon! And maybe we can actively start this initiative there.



Android attacks – you shouldn’t be surprised

04.03.2011 by Martin Kuppinger

News about a significant number of malicious apps for the Android mobile platform broke yesterday. Many comments still sounded a little surprised. However, there is no reason to be surprised. Today’s mobile phones are insecure by design. The vendors haven’t understood that security is mandatory for long-term success, and they are still selling devices which are about as secure as a PC from the mid-1980s. Unfortunately, these devices are connected and have far more capabilities than the PCs of those early days.

The vendors (and the developers of the OSes) are simply ignoring the need for built-in security. A PIN code is a ridiculous mechanism for protecting a device which can hold so much sensitive data and which can be used to access sensitive corporate information. How about biometrics or other types of strong authentication? There are many potential solutions out there for mobile devices which are secure by design and still user-friendly.

In addition to the insecure devices and OSes, the concept of apps itself is insecure by design. How do you manage apps for your corporate users? How do you do DLP (Data Leakage Prevention) for apps? Unfortunately, it is a good example of the wrong design principle “function follows form” – it should be “form follows function”. The app concept is about markets and money, about being “cool”, and it is not well thought out, because it isn’t secure (enough).

For organizations, the only consequence can be to review the policies for using mobile devices and to massively restrict the professional use of devices which are insecure and have too many capabilities. That requires an analysis of which platforms are allowed for which use cases. You might argue that this won’t work because even the managers want to use their gadgets. Correct, it isn’t a simple task. However, in virtually every country there are laws which require the board to enforce adequate risk management. Using insecure gadgets with access to sensitive corporate information (starting with eMail) is a risk which has to be mitigated – by restricting the use of such gadgets or by finding more secure ways to use them. By not doing so (or even using insecure devices as a board member), legal requirements are ignored. I’d bet that mobile security will become the next hot topic for auditors…

For vendors, these new attacks are hopefully a wake-up call which helps them understand that security is a key requirement for long-term success in the market. That might lead them to invest more in security which is easy to use.

In the meantime we will see masses of point solutions and services to better protect mobile communication. Be careful with these – some might deliver real value, others will turn out to be little more than placebos. In any case, you should first have a strategy and policies for the secure use of mobile devices before you invest in such point solutions and services.

It will be interesting to observe what happens in the next months. Will vendors wake up? Or will it need more and even more severe incidents for that?


Persons, identities, users, accounts – and which attributes to select from HR

03.03.2011 by Martin Kuppinger

One of the discussions which pops up in many advisories is around the terms and related objects to use in Identity and Access Management. This is directly related to the question of which attributes to use where, and which to import from HR.

My best bet, based on experience, is that customers should look at three levels of objects:

  • Persons, i.e. the human being.
  • Identities, i.e. a virtual representation. There might be several identities for one person. Someone might be the manager of several companies within an organization. Or someone works as an internal employee at an insurance company and as an external sales person. Or someone is both a customer and an employee.
  • Users or accounts, i.e. the representation in specific systems. There might be one or many accounts associated with an identity.

I didn’t cover service accounts and similar things here – a service account is usually associated with an identity which is responsible for it, so it fits well into this model. Technical users fit in there as well.

If you like that model, you should be somewhat careful when looking at vendor implementations. Many vendors have an oversimplified model which doesn’t support the person out of the box. That can lead to complex customization requirements and more complex provisioning tasks to keep person-related attributes in sync, since those attributes then have to be stored per identity.

Now let’s look at the attributes:

  • Some of them are associated with the person – the ones which don’t change per identity, like name, surname, date of birth, or sex. And a unique identifier.
  • Some of them are associated with the identity, like the office location, the eMail address (most likely – there might be a primary eMail address defined for the person). And an identity-specific identifier which could change, like an employee number per company.
  • Finally, there will be a few attributes per account, which are specific to the individual systems.

Which attributes to pick:

  • Identifying ones – how can you clearly identify someone?
  • Describing ones – what do you need for the yellow pages?
  • Authorization-focused ones – those which influence authorization decisions, i.e. are in fact a sort of “entitlement”.

Sounds easy – and, honestly: It is easy if you follow the best practices.
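For illustration, here is a minimal sketch of the three-level model and the attribute placement described above – the attribute names are examples, not a complete or mandated schema:

```python
# Minimal sketch of the three-level model: person, identity, account.
# Attribute names are examples, not a complete or mandated schema.

from dataclasses import dataclass

@dataclass
class Person:          # the human being; attributes that never vary per identity
    person_id: str     # unique identifier
    surname: str
    given_name: str
    date_of_birth: str

@dataclass
class Identity:        # one of possibly several virtual representations of a person
    person: Person
    identity_id: str   # identity-specific identifier, e.g. an employee number
    office_location: str
    email: str

@dataclass
class Account:         # the representation of an identity in a specific system
    identity: Identity
    system: str        # e.g. "Active Directory", "SAP"
    account_name: str

# One person with two identities (internal employee and external sales person),
# each with its own account(s) in target systems.
p = Person("P-0001", "Doe", "Jane", "1980-01-01")
internal = Identity(p, "E-4711", "Munich", "jane.doe@insurer.example")
external = Identity(p, "X-0815", "Stuttgart", "j.doe@salespartner.example")

ad_account = Account(internal, "Active Directory", "jdoe")
crm_account = Account(external, "CRM", "jane.doe.ext")

# Person-level attributes are stored once and reachable from every account.
print(ad_account.identity.person.surname)   # "Doe"
```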

