How to deal with Data Sprawl? Could a sticky policy standard help?

21.07.2011 by Martin Kuppinger

Data Sprawl appears to me to be one of the biggest challenges in information security. And, by the way, Data Sprawl is not an issue specific to Cloud Computing. It is a problem organizations face day by day.

What happens when data is extracted from a SAP system? One example: a CSV (flat) file is created with some data from the HR system. This file is delivered to another system, in the best case using some secure file transfer. But what happens then? That other system processes the file in one way or another. It might export some or all of the data, which then ends up in yet another system. And so on…

The point is: Once data leaves a system, it is out of control.

The problem is that this might happen not with just one CSV file but with hundreds of them, and with dozens of systems exporting and importing that data. Governance is difficult to implement. You can define a process for approving exports. You might even define rules for the use of exported data. You might review the exports regularly – are they still needed? However, reviewing what happens with the data at the target systems (are the rules enforced?) is pretty complex. And there is, up to now, no technical solution to that problem.

Things become even worse with Data Warehouse and Business Analytics. Data frequently ends up in large data stores and is analyzed. That means that data is combined, sometimes exported again, and so on. How do you keep control? Implementing Access and Data Governance for Business Analytics systems is a big challenge, and auditors frequently identify severe risks in that area – which is no surprise at all.

Another scenario is PII on the Internet. If we give some PII to some provider for some reason, how can we ensure that the provider doesn't give that PII away? There is no way, I'd say. We might use special eMail addresses or fake information to trace abuse of our PII, but that's not really a solution.

So what to do? In the short term, it is about implementing processes which at least try to minimize Data Sprawl and the associated risk, as mentioned above. These processes and policies are far from perfect. They help internally, but not for PII.

In the (very) long term, we might use technical solutions like homomorphic encryption and other technologies being developed around the "minimal disclosure" approaches to address some of these issues. We might then use an approach like Information Rights Management which works not on a document basis but on an attribute basis. But most of these things will only help us sometime in the future, if ever.

But what about defining a policy standard which is sticky to the data? A standard which describes how data may be used? If systems support this standard, they can enforce it. The idea would be to have such a standard and to allow exports, at least of sensitive data, only to systems which support the standard and enforce the policies. If data is split up, the policy has to stick to all parts (as long as it applies to all of them). If data is combined, the policies have to be combined – the intersection of the policies then applies, as the sketch below illustrates.
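To make the combination rule concrete, here is a minimal sketch in Python of how the intersection of two sticky policies could be computed when data sets are merged. The policy attributes (allowed purposes, re-export flag, clearance level) are my own illustrative choices, not part of any existing standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StickyPolicy:
    """A policy that travels with a data set. The attribute names are
    illustrative; a real standard would have to define them."""
    allowed_purposes: frozenset  # e.g. {"hr-reporting", "analytics"}
    export_allowed: bool         # may the data be exported again?
    min_clearance: int           # minimum clearance of accessing users

def combine(a: StickyPolicy, b: StickyPolicy) -> StickyPolicy:
    """When two data sets are merged, only the intersection of what
    both policies permit applies to the combined result."""
    return StickyPolicy(
        allowed_purposes=a.allowed_purposes & b.allowed_purposes,
        export_allowed=a.export_allowed and b.export_allowed,
        min_clearance=max(a.min_clearance, b.min_clearance),
    )

hr = StickyPolicy(frozenset({"hr-reporting"}), export_allowed=False, min_clearance=3)
crm = StickyPolicy(frozenset({"hr-reporting", "analytics"}), export_allowed=True, min_clearance=1)
print(combine(hr, crm))
# StickyPolicy(allowed_purposes=frozenset({'hr-reporting'}), export_allowed=False, min_clearance=3)
```

The combined policy is always at least as restrictive as each of its inputs – which is exactly the property that keeps merged data under control.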

Such an approach has limitations, first of all because it needs someone to define the standard. And, like with all standards, it is about reaching critical mass. On the other hand: virtually every organization has the problem of Data Sprawl and lacks a valid answer to the questions asked in the context of Data and Access Governance. Thus, there is a real need for such a standard. From my perspective, the large vendors in the markets for Business Applications (e.g. ERP, CRM, and related systems), for Business Analytics, and for the ETL and EAI applications are the ones who should work on such a standard, because they are the ones who have to support it in their systems. And they should start quickly, because their customers are increasingly under pressure from the auditors.


The Sony case – or how to best ignore security best practices

04.05.2011 by Martin Kuppinger

The data theft at Sony has been in the headlines for some days now. What surprised me most is that – from what I read and heard first – even the passwords were stored unencrypted. However, Sony claims to have used a hash to protect these passwords. It looks like Sony also stored the credit card numbers together with the associated security codes (which are, by the way, one of the most ridiculous approaches to enhancing security) and, no surprise, unencrypted. But if Sony did use hash values: why did everyone assume that these passwords had become common knowledge (at least for the hackers and their "customers")?

But let's start with passwords: even though it is still done frequently, it is anything but good practice to store passwords unencrypted. You do not even need to store them encrypted. Just store a hash, apply the same mathematical algorithm to the passwords entered, and compare the hashes. Even though some of the algorithms in that area aren't "bullet-proof", that is far better than storing millions of passwords unencrypted. Storing passwords unencrypted is such a fundamental error that one can only call it grossly negligent. It is not a simple fault but ignorance of fundamental security requirements – even more so when that information is associated with credit card information and other types of highly sensitive data like bank accounts. If Sony did store hash values, that would be good practice, depending a little on the algorithm used. It reduces the risk for Sony's customers, even though there is still some risk when hash values are stolen: passwords might be derived from them, for example by brute-force attacks.
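As an illustration of the hash-and-compare approach described above, here is a minimal sketch using only Python's standard library. It adds a random salt and an iterated hash (PBKDF2) to make the brute-force attacks mentioned above far more expensive; this is my example of good practice, not a description of what Sony actually did:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only the salt and the hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Apply the same algorithm to the entered password and compare hashes."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```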

Let's look at the next point. Sony has, from what we know, become the victim of an external attack. Accessing large amounts of data most likely involved a SQL injection attack. Interestingly, the Sony PlayStation website was hit by such an attack before, some three years ago. The fact that something happened before raises the question of why Sony didn't protect the information better. Haven't they heard about database security tools, and especially database firewalls? That's exactly the type of technology which helps you protect data like (if you have them) hashed or unprotected passwords or credit card data. We recently ran several webinars on database security and database governance, the most recent one yesterday, specifically about database firewalls. All the recordings are available.
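To show the class of flaw involved, here is a minimal, self-contained sketch of a SQL injection and the standard countermeasure at the application level – parameterized queries. A database firewall adds a second line of defense on top of this; the table and payload below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pwd_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '5f4dcc3b...')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: the payload becomes part of the SQL statement itself
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # returns every row in the table

# SAFE: the driver passes the input as data, never as SQL
print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []
```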

Overall, it doesn't look like this was the most sophisticated hack ever. It looks like no insiders were involved (which would lead to the topic of PxM, i.e. protection against privileged access/users). It looks like Sony ignored not just best or good practices but, in many areas, even average security practices.

The bad thing about this is that Sony isn't alone out there when it comes to ignoring good/best practices in security. The most common reason is that organizations just don't think about security – either because it is too complex or because of the price to pay for it. Hopefully, the Sony case alerts some of the others and leads them to review and improve their security. However, there is a saying in German that hope dies last. And I feel that this is more about hoping than about really expecting web sites to become more secure by design.

By the way: the European Identity Conference, to be held next week in Munich, is about information security, IAM, GRC, and database security. A good place to learn more and to meet the analysts of KuppingerCole to discuss information security issues in person.


Database Security – a strategic perspective

31.03.2011 by Martin Kuppinger

In recent months I've done a lot of research around database security, talking with vendors like Oracle, IBM (Guardium), Sentrigo (now McAfee), Imperva, Bitkoo, and some others, as well as with several end-user organizations which are either using database security products or evaluating those technologies.

When looking at the market, it is very important to understand that it is not homogeneous. The different solutions range from firewalls to specific tools for label security or data masking. Some are tightly integrated with databases; others are non-intrusive. I will provide a broad overview in an upcoming research note covering the entire database security market and the vendors therein.

But before selecting the right vendor and the right tool for your database environment, you should ask and answer another question: How does this fit into your overall IT security strategy and implementation? I’m not a friend of point solutions in security. Solving one problem without looking at all the other problems doesn’t necessarily increase the overall level of security achieved. It might give a better feeling, but frequently there is still too much attack surface left.

Just think about securing your databases with a firewall. Some of the attack surfaces left are:

  • Security issues in the applications which access data in the databases
  • Administrative actions
  • All actions performed locally at the database server
  • Copying or deleting the database with administrative access at the operating system level

And that's just a short and incomplete list. From a strategic perspective, you have to look at how to secure the entire stack. Recently I was with a customer who was discussing where best to start securing his applications. As a quick start, I proposed that he build a simple spreadsheet with his (defined) 30 most critical apps and the stack these apps use – including operating systems, application platforms, hypervisors, and of course the databases. That simple spreadsheet gives him an impression of the dependencies he has to keep in mind – it visualizes that security isn't about point solutions. A minimal sketch of such a spreadsheet follows below.
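Here is a minimal sketch of what building such a spreadsheet could look like – the applications and stack entries below are invented; the point is the mapping itself:

```python
import csv

# Hypothetical entries: map each critical app to every layer it depends on,
# so that shared dependencies become visible at a glance.
apps = [
    ("HR portal", "Linux",   "Java EE", "ESX", "Oracle 11g"),
    ("Web shop",  "Windows", ".NET",    "ESX", "SQL Server"),
    ("Reporting", "Linux",   "Java EE", "ESX", "Oracle 11g"),
]

with open("app_stack.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["app", "os", "platform", "hypervisor", "database"])
    writer.writerows(apps)
```

Two applications sharing the same database instance, as in the first and third rows, is exactly the kind of dependency such a sheet makes visible: a weakness at that database affects both apps at once.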

I'm not saying you should not invest in database security – but it should be one element of security. Thus, database security has to be put into context.

One interesting aspect in that context are database firewalls. There are some firewalls out there inspecting packets for SQL traffic based on policies. However, when inspecting packets – why not inspect everything? CIFS/SMB traffic to file servers? Web service security? That would allow applying a consistent set of policies wherever appropriate. It would provide a consistent layer of security. For sure that won't solve all problems, but the advantage over having a "DB firewall", a "SharePoint firewall", a "CIFS/SMB firewall", and so on is obvious – the sketch below illustrates the idea. Another example is privileged user (account, identity, access) management, i.e. PxM. That is important for database management systems, but it is just as important for other types of systems (apps, operating systems, hypervisors, network appliances, …). I'd opt for a solution which covers all of them.
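A toy sketch of that design idea – one shared policy engine fed by protocol-specific decoders – might look as follows. The rule format and the protocol names are purely illustrative assumptions:

```python
# One rule set, applied consistently across protocols.
RULES = [
    {"protocol": "sql",  "deny_if_contains": "DROP TABLE"},
    {"protocol": "cifs", "deny_if_contains": "payroll.xlsx"},
    {"protocol": "soap", "deny_if_contains": "<AdminRequest>"},
]

def inspect(protocol: str, payload: str) -> bool:
    """Return True if the traffic is allowed under the shared rule set."""
    return not any(rule["deny_if_contains"] in payload
                   for rule in RULES
                   if rule["protocol"] == protocol)

print(inspect("sql", "SELECT * FROM orders"))            # True
print(inspect("sql", "DROP TABLE orders"))               # False
print(inspect("cifs", r"\\fileserver\hr\payroll.xlsx"))  # False
```

Real inspection engines work on parsed protocol structures rather than substrings, of course – the sketch only illustrates the benefit of one policy layer instead of one isolated firewall per protocol.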

For sure there are also many database-specific aspects of security, like data masking and others. And given that there is no "multi-purpose firewall" or other solution which covers everything, it is about using several solutions. There are also good reasons for specialized tools – easier to implement, easier to manage, more specific features. However, they should be used as part of an overall strategy, not as isolated point solutions. Customers have to look at it from that perspective – and vendors should move toward providing more integrated solutions over time.

Good security is achieved by strategy, not by tactics.

EIC 2011: Munich, May 10th to 13th – the place to be for IAM, GRC, Cloud Security, Database Security, and more…


Database Security – a hot topic

16.03.2011 by Martin Kuppinger

During the last few months I've done a lot of research around database security, and some advisory work. This market, with the very big player Oracle and its large number of offerings, and IBM as another heavyweight vendor, is growing rapidly. Besides the two big ones there are several specialized vendors like Sentrigo, Imperva, Bitkoo, NetIQ, and several others – I'll cover the market in an upcoming research note which will provide an overview of all the key players. Have a look here regularly – the research note will be out by around mid-April at the latest… By the way: you'll also find there a new research note on Database Governance, the "umbrella" for database security.

But that's not my main topic for this blog post. One of the questions which pops up frequently around database security is whether specific tools really help. What about outgoing traffic, when a database firewall only looks at incoming SQL requests? What about the risk of a root admin at the OS level deleting (or copying) database files? What about the price to pay for encryption? What about the technical users and the hard-coded business rules in applications which access databases and filter the result sets? And these are only some of the questions we hear in our webinars (several of them around database security, available as recordings) and our advisories.

These questions all touch one point: database security is not about implementing a single tool. There is a good reason for Oracle having many different tools in its portfolio, but even database security by itself isn't sufficient. It is about looking at the entire topic of Information Security, from the applications to the databases (and business analytics!), the operating system, and the network. Understanding where and why information is at risk is key to defining the security strategy, the controls, and the tools which in combination mitigate the risk to a level which appears to be acceptable.

Thus, successful database security strategies can never be defined by looking only at the databases; they need to be defined in the overall context of Information Security. Databases are part of the overall IT ecosystem, and database security is part of overall Information Security – just as Database Governance is an element within Corporate Governance and IT Governance. It's always about "strategy first" when it comes to security – and it's about avoiding point solutions.

You can learn a lot more about trends and best practices around Database Governance, Database Security, and overall Information Security at the European Identity Conference 2011, Munich, May 10th to 13th.


We need a policy standard for the use of data

10.03.2011 by Martin Kuppinger

One of the issues I'm confronted with in most of the advisories I do is "how to protect information once it leaves a system". A typical situation is that HR data leaves the HR system and is imported into another system – the identity provisioning system, a business analytics tool, or whatever else. Once the information is out of HR, it is out of control. Lost somewhere in the happy hunting grounds of information…

However, from a governance perspective (and due to many specific regulations) we have to keep control. PII has to be well managed, financial data has to be well managed, risk-related information as well – and the same is true for many other types of information.

What can we do to address this? The first step is to have a governance process in place for the export/import interfaces. There needs to be a defined process for deciding which information is allowed to be exported for use in other systems. Knowing the paths data can take is an important step; having processes, approvals, and documentation in place definitely helps. Limiting the paths by using fewer systems to distribute information helps as well. You might have 100 point-to-point interfaces between your HR system (or whatever system you are currently looking at) and the target systems, or one interface to a hub, which might be an identity provisioning tool, an ETL platform (Extract, Transform, Load), or something else.

However, these are all limited solutions. They don't solve the question of what happens with the data in the target systems. For sure you can expand the processes and policies and try to ensure that the information is handled appropriately in the targets, but that is a pretty hard task.

Some time ago I discussed with Kim Cameron (Microsoft) how to protect the PII a user hands out – how can the user keep control over "his" PII? Once he hands it over to a system, he is out of control. We might apply techniques derived from IRM (Information Rights Management) to this data; however, that would be a big beast to tame with all the encryption, user management, and policies involved. Kim brought in his idea of using policies which are somewhat "sticky" to the PII and enforced by the target systems.

For the foreseeable future this is probably the only approach which will really work, for any type of information flow: having a policy standard which defines the rules for using the data. The policy has to flow with the data, and it has to be enforced by the target systems. It could include rules covering aspects like the sensitivity of the information (PII, financial, …), whether it can be exported again, specific protection requirements, limitations on access ("only users with clearance …"), and so on. When information from different sources is combined in a target system, a resulting policy has to be calculated which again is sticky for that data. The sketch below shows what such an enforcement check could look like.
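To illustrate, here is a minimal sketch of what an enforcement check at a target system could look like. The record layout and the policy field names are my own assumptions, not part of any existing standard:

```python
# A record whose policy travels with it ("sticky"), and the check a
# receiving system would run before serving the data.
record = {
    "payload": {"employee": "J. Doe", "salary": 72000},
    "policy": {
        "sensitivity": "PII",
        "re_export_allowed": False,
        "required_clearance": 2,
    },
}

def access(record: dict, user_clearance: int, wants_export: bool) -> dict:
    """Serve the payload only if the sticky policy permits it."""
    policy = record["policy"]
    if user_clearance < policy["required_clearance"]:
        raise PermissionError("clearance too low for this data")
    if wants_export and not policy["re_export_allowed"]:
        raise PermissionError("policy forbids re-exporting this data")
    return record["payload"]

print(access(record, user_clearance=3, wants_export=False))  # payload served
try:
    access(record, user_clearance=1, wants_export=False)
except PermissionError as e:
    print(e)  # clearance too low for this data
```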

That might sound a little complex, but from my perspective it isn't really that difficult to implement. Is there any chance of success? I think so. Once a standard for such policies is defined and implemented by some of the larger vendors, it will become a decision criterion for information systems: do they support this standard, so that I'm better able to fulfill my governance and compliance requirements? Such a standard will help in achieving data governance and in supporting data owners (and many other players…).

Like any standard, this requires large vendors to participate and to push the standard. Given that most large vendors are facing these issues, participation should be a no-brainer.

I'm very open to engaging in starting such an effort – bringing the participants together, handing over to a standardization initiative, providing my thoughts as initial input, and helping with dissemination. So if you are interested in participating – just send me an eMail. Once I have sufficient initial support, I'll try to get the remaining big vendors on board.

And, by the way: don't forget to register for EIC 2011 and Cloud 2011 – the early bird rate will end soon! And maybe we can actively start this initiative there.


Approaches to secure your data in databases

17.02.2010 by Martin Kuppinger

Last week I had an interesting briefing with IBM regarding their Guardium acquisition. With this acquisition of a company specialized in database security, IBM becomes the second large vendor investing in that area, following Oracle, which has had database security products in its portfolio for some years now. The IBM/Guardium deal fits the current times pretty well, given the increasing problem of information theft. Besides IBM and Guardium there are some smaller vendors in that market, which I will cover in another post soon.

IBM Guardium, in contrast to the Oracle approach, is not tied to a specific database management system but works as an external solution. There are obviously pros and cons to both approaches. Performance, administration, flexibility regarding the defined policies, and other aspects differ significantly. Thus, before choosing a solution, a detailed analysis of these approaches should be performed (and KuppingerCole will provide a market overview for database security around April, which might be a good starting point for such an analysis).

The entry of IBM into that market shows the increasing maturity and relevance of this particular IT market segment. And it raises the question of which role database security can play within IT security. From my perspective, it is an interesting area which is mandatory for protecting sensitive information. Information in databases is at risk, and cases like BKK or the stolen data from Swiss banks offered to the German government prove that. However, this is just one element within an IT security strategy focusing on authorized access to data. Securing the database with the wrong policies, or giving away privileged accounts to untrustworthy parties, won't help much. Thus, database security projects should never be driven by the database guys alone but must be understood as an element within IT security blueprints. Only a consistent approach to security will really reduce the security risks and thus the related operational risks.

Even more, I think that database security will always be somewhat limited in its scope. Once data is outside the database, it isn't protected anymore. In the long run we might have to fundamentally rethink the concepts of today's databases and make them "security-aware". What do I mean by that? Data within databases should be inherently protected. Think about applying the concepts we find today in Information Rights Management (IRM) at the document level, at a much more granular level, to data within databases, ensuring that any record (or part of a record) can only be accessed according to defined policies – the toy sketch below illustrates the idea. Such an approach would have massive impact on existing technology. How to index? How to deal with encrypted information? How to define these policies? However, if you look at database security from a very fundamental point of view, it becomes obvious that applying database security to existing databases won't fully solve the problem, because it only covers "data at rest".
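As a toy illustration of that granularity, think of each field in a record carrying its own policy, with queries returning only what the caller's clearance permits. A real implementation would also need encryption and key management, which this sketch deliberately omits; all names and values are invented:

```python
# Each field carries its own (illustrative) policy.
row = {
    "name":   {"value": "J. Doe",      "min_clearance": 1},
    "salary": {"value": 72000,         "min_clearance": 3},
    "ssn":    {"value": "123-45-6789", "min_clearance": 5},
}

def read_row(row: dict, clearance: int) -> dict:
    """Return only the fields this clearance level is allowed to see."""
    return {field: cell["value"]
            for field, cell in row.items()
            if clearance >= cell["min_clearance"]}

print(read_row(row, clearance=3))  # {'name': 'J. Doe', 'salary': 72000}
print(read_row(row, clearance=1))  # {'name': 'J. Doe'}
```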

Nevertheless, I think that any organization has to think about implementing database security in the meantime, until we have better solutions sometime in the far future – I'd expect fundamental changes to database technology to take at least 10-15 years to become ready for mass adoption. It might take even longer. To cite John Maynard Keynes, the famous economist who focused on short-term theories and, when criticized for not looking at long-term evolution, said: "In the long run we are all dead". Given that, in the short term we should evaluate and implement existing database security approaches, rethink the authentication and authorization approaches within databases (using the GRANT statement in a little more detail…), and integrate this with our overall IT security and governance approaches (and especially IAM). In the meantime, the vendors have to think about how to take the next fundamental step towards making DBMSs inherently security-aware.

