16.10.2012 by Martin Kuppinger
Yesterday there were two interesting news items about Google. A document issued by 24 of the 27 European data protection authorities requests that Google change its privacy policies. They argue that collecting personal data to the extent Google does poses a massive risk to the privacy of users. I can agree. The authorities, however, don't consider the policies illegal, at least not yet. That might change with the upcoming new EU data protection rules in 2014. Nevertheless, they request that Google better inform users about the use of their personal data.
I personally think that even that step might be helpful. From my perspective too many users just haven’t understood that there is no such thing as a free lunch. If you don’t pay with money, you have to pay in another currency. And the currency of Google is “privacy” or “personal data”.
It will be interesting to see what happens next. At least Google is facing some pressure from the EU now – and this most likely hasn’t been the last word in that issue.
The second issue Google is facing concerns allegations of abusing its dominance of Internet search in violation of antitrust laws. An internal draft memo reportedly recommends suing Google for that. Again, it will be interesting to observe what happens next.
Overall, it wasn’t a good day for Google yesterday.
04.10.2012 by Martin Kuppinger
Recently the "Internet Association" was created. The claim on its website is "We are the unified voice of the Internet economy". It then states that it represents the interests of America's (!) leading Internet companies and their global (!) community of users. The real message follows afterwards: "We are dedicated to advancing public policy solutions to strengthen and protect internet freedom, foster innovation and economic growth and empower users".
This could also be read somewhat differently: we are the lobbyist organization which will try to avoid everything that could stop us from building our business on the data we have obtained about our users; we don't want enforcement of privacy laws. In fact, the Internet Association was created in response to DNT ("do not track") and is targeted against the FTC (Federal Trade Commission) and others who are driving this concept. A Do Not Track signal indicates that the user doesn't want his activities tracked. But even though DNT was the main reason for founding the Internet Association, it is just a symptom of the bigger topic behind it.
Whether this empowers their users is more than questionable. It might empower users to use some features for "free", which in fact means paying in the currency of privacy instead of with real money. But it clearly doesn't empower users to better control their privacy on the Internet.
When looking at the list of the (as of now) fourteen members of this association, it comes as no surprise that the usual suspects like Facebook, Google, or LinkedIn are amongst them – all of them with a long history of dealing with privacy in a questionable way. On the other hand, Microsoft is not a member of that community.
It is very important to understand what this association is about. Its understanding of "internet freedom" is more that of a robber baron than that of a democracy. It is about the members' freedom to do business the way they want, unhindered by privacy regulations. It is not about the freedom of choice for users to decide what happens with their data. My colleague Craig Burton wrote a post some time ago in another context – but his claim therein applies here as well:
Freedom of Choice != Your Choice of Captor
The counter argument to this position is that users might want to personalize their online experience, like “tell me when I’m near something I’m interested in”. So tracking in and of itself is not necessarily bad. However, who is doing the tracking and what happens to that data is problematic.
The point is: it is still about freedom of choice. DNT enables the user to make his own decisions. Best of all, it is done in a way where the user can opt in to tracking, where he can opt out, and where he can decide who can track and who cannot. Given that there is massive tracking today without any information to the user, DNT technology and regulation are mandatory from my perspective.
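Technically, DNT is just a simple HTTP request header that browsers send on the user's behalf. Here is a minimal sketch in Python (the `should_track` helper is hypothetical, not part of any standard library) of how a server could honor that signal:

```python
# Minimal sketch of server-side handling of the "do not track" signal.
# The browser sends the header "DNT: 1" when the user has opted out.

def should_track(headers: dict) -> bool:
    """Return False when the client sent DNT: 1, True otherwise."""
    # Real HTTP stacks treat header names case-insensitively,
    # so we normalize the keys for this sketch.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

print(should_track({"DNT": "1"}))        # False (user opted out)
print(should_track({"User-Agent": "x"})) # True (no signal sent)
```

The whole debate is about whether services will actually respect this one-line signal – technically, honoring it is trivial.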
Besides this, I'm not really convinced by the results of tracking. Even in the days of big data, we end up with two problems: my historic interests don't necessarily match my current interests, and even when they match, the proposals made are frequently just weak. To give you an example: some time ago I was looking for new camera lenses on Amazon (I buy a lot via Amazon, mainly due to the simplicity and the reliable delivery). I was only looking for FX lenses for the Nikon system (full-frame sensor). Right afterwards, Amazon started sending me masses of emails proposing lenses for the DX system (APS-C sensor) and other things I definitely don't need.
So to sum up:
- I strongly believe in the right of the individual to have freedom of choice. That clearly applies to tracking in the browser and on mobile devices.
- Tracking might be helpful – but despite all efforts and big data technology, the results provided today are at best mediocre (on average).
24.07.2012 by Martin Kuppinger
Apple recently removed the app Clueful, provided by the IT security software vendor Bitdefender, from its App Store. That at first glance isn’t momentous news. However, when looked at in a little more detail, it raises some questions.
The iOS app Clueful had been available in the App Store for about two months; it had been approved by Apple back then. Bitdefender, even while being pretty cautious about what they tell the public, says:
Apple informed Bitdefender’s product development team of the removal – for reasons we are studying – after it was approved under the same rules.
This is a pretty interesting statement. The reasons are still being "studied". But it looks like Apple first decided to let this app pass and then removed it based on the same set of rules.
Clueful is an app which checks other apps. It analyzes whether apps send personal data unencrypted, whether they access the contacts, whether they track the current location of the user, and so on. It shows that information and explains what these facts mean to the user. Simply put, and in the context of everything that has happened in iOS, this is obviously a useful app for users.
Bitdefender also had sent out a press release that, according to their research, 42.5% of the apps do not encrypt users’ personal data, even when accessed via public Wi-Fi. 41.4% of the apps track user locations unbeknownst to them. And almost 20% of the apps still can access the entire Address Book, some not even notifying the user.
However, from the perspective of Apple and many other app providers, it is a dangerous app because it reveals "too much". So what are the reasons for removing the app from the App Store? Bitdefender doesn't say exactly, and Apple doesn't comment at all.
However, what shall I say? Honi soit qui mal y pense.
It is at least worthwhile to further follow this issue and to see what Bitdefender and Apple will say about it in the future – and whether the Clueful app (or other apps with that functionality) will be available in the App Store again soon.
And it is time to let users decide about privacy. At the very least they should be kept informed so that they can make decisions. And once they've made a decision, that should be accepted – unlike, for example, in the Twitter app, which doesn't take no for an answer but asks again and again whether users don't want to allow access to "Twitter contacts" and so on.
17.07.2012 by Martin Kuppinger
Germany has, in contrast to many other countries, mandatory citizen registration. One side effect is the national ID card (now an eID). Another is that there are registration offices at every local authority. And there is a law called the "Melderechtsrahmengesetz" (MRRG) which governs everything about this registration. A few days ago the German Bundestag passed a revision of this law – and did it during the semi-final of the European Football Championship between Germany and Italy (real football, played with the feet and a ball; not American football, played with the hands and an egg-shaped ball). That explains why it took a little while for the outcry of the masses to develop.
The MRRG revision at first glance appears to be a success for the lobbyists of the marketing industry and the address dealers. In a draft of November 2011, the law required the citizen's explicit consent before registration offices could pass data to someone else. In the revision which recently passed the Bundestag, the citizen has to explicitly opt out – and even that opt-out is not honored if the party requesting the data already possesses it (which, for example, would be the case if someone participated in a contest and gave away his address data) and simply wants to validate or update it. In addition, the range of attributes which can be requested by third parties is now much larger than before.
However, the MRRG has had these provisions for quite some time. The ability to request data about others without consent was introduced in 1938 by the then minister of the interior, Frick (who was sentenced to death in the "Nürnberger Prozesse"), by decree, bypassing the parliament. Its purpose was, amongst others, to facilitate denunciation. The current MRRG still allows anyone to make basic requests about everyone at the registration offices without stating a purpose. There is neither an opt-in nor an opt-out in the current version of the law. This is in contrast to the basic right of self-determination regarding personal data which has been defined by the German Federal Constitutional Court. The revision of the law doesn't remove data protection from the MRRG but, for the first time, adds data protection to that law.
Today, organizations like the Schufa (providing financial "health" information about individuals to banks and others), the Federal Office for the Protection of the Constitution ("Bundesamt für Verfassungsschutz"), and private investigators are the main "customers" of the registration offices. This is, by the way, not free of charge – a single query costs in the range of 5 € to 10 € per registration office – and it might require asking a number of registration offices to find someone. So it is somewhat unlikely that address dealers and the marketing industry in general will use the new options on a broad scale. There is just no valid business model behind it.
Obviously, there is a need to find a balance between privacy and the interest of marketing and others in accessing some data. Besides, there was a strong need to update the law, which dated back to 1980 in its current version but was still based on the 1938 law. So the current, very emotional discussion appears a little extreme to me – and it isn't based on facts. There are things which need to be changed: there is a need for consent, which is introduced for the first time with this revision of the MRRG. The amount of data should be carefully evaluated, thinking of "minimal disclosure" instead of providing masses of data – that is an area where the law could be improved. Access shouldn't be free of charge – it isn't today and it won't be in future. But on the other hand, no one should complain if he gave his data to a company when participating in a contest – a price was paid for that data in some way.
Given that this law has to pass not only the Bundestag but also the Bundesrat (the upper house of the German parliament), it is very likely that some parts will be changed before it becomes effective. That is the positive thing about privacy being back in public discussion. The critical aspect is that the discussions are emotional and sometimes even hysterical, not fact-based – and, in the case of the MRRG, that there is no real need to change the version which passed the Bundestag.
The lesson we could learn from this is to work based on facts and not on emotions – even when it comes to privacy discussion. Not every access to personal data is bad per se.
12.03.2012 by Martin Kuppinger
This year's CeBIT, the world's largest IT fair, has the topic of "Managing Trust". For some reason, the "Deutsche Messe", the company behind CeBIT, decided to have Eric Schmidt as one of the speakers at the official opening ceremony anyway. Right after Schmidt's speech, Microsoft sent out a press release, "Ralph Haupter comments on CeBIT opening". Ralph Haupter is the General Manager of Microsoft Germany. The summary of this press release is simple: according to Microsoft, Eric Schmidt simply missed the topic. He didn't talk about managing trust but about some opportunities of the digital world of the future (as Google sees that future). From the Microsoft perspective, it is also about security, privacy, transparency, and fair access.
I can't remember any press release like that, with one vendor commenting that harshly on a leading spokesperson of another vendor. Microsoft has positioned itself, especially in Europe, as an advocate of privacy and data protection laws. It recently announced new versions of its Office 365 contracts which fully take into account the EU requirements. So is this entire thing about privacy?
From my perspective, privacy is only a battlefield which Microsoft has identified in a bigger fight against Google. Microsoft has a long history around security and privacy, and it had to learn its lessons the hard way. A good article on that can be found here.
Right now, Microsoft sees the opportunity to pass the buck (in German, to pass the "Schwarzer Peter") to another vendor, with Google being the hottest candidate besides Facebook and (yes!) Apple.
But it goes beyond playing that game – it's about business. Microsoft believes in a business which is based on accepting privacy and data protection laws. With its strategy, it focuses on shredding Google's image as a provider to the enterprise. Microsoft earns its money mainly with paid services. It was pressed by Google into providing solutions like Office 365; now it strikes back. Microsoft wants to keep Google from becoming a real competitor in that type of business, and by pointing at the privacy issues, Microsoft rubs salt into Google's wound. Overall, it is about the question of whether a business based on ignoring privacy can be successful over time, or whether companies are more willing to pay for models which respect the rules of privacy and data protection.
There are still a lot of open questions. Bing costs a lot of money – how will Microsoft finance it without an advertising model like Google's? Yesterday a customer told me that he switched from Google to Bing quite a while ago because he really hated seeing trouser adverts for weeks once he had been looking for trousers on the Internet. How close are some of the Microsoft Live offerings to what Google does? Currently I observe Microsoft carefully watching not to cross that line.
In the end, this fight is only one amongst many others. ACTA and copyright laws in general, QoS (Quality of Service) for some providers or not, software patents, and many other discussions are in fact all about the same basic topic: How free is the Internet? And where should this freedom end? It's also about "might is right" versus a more regulated and thus protective environment.
09.03.2012 by Martin Kuppinger
Another recent discussion was about Microsoft blaming Google and Facebook for circumventing IE privacy policies. There were many articles about that issue; you'll find two of them here:
However, blaming IE for not allowing users to do the cool things users want to do is definitely the wrong approach. It is about allowing the user to choose what he wants to do – at least about accepting that users might opt for privacy. If P3P fails in this from the viewpoint of Google and Facebook, then we need another standard. But clearly, if the user expresses his will to keep some privacy, actively bypassing this is nothing other than an attack. I don't really see a difference between acting that way and other types of attacks, like phishing and all the other kinds of malware we are confronted with on a daily basis.
The interesting question now is what is really the case. Let's look at some options. First, there are some cases for what Microsoft could do in IE:
1. Microsoft IE interprets P3P statements correctly
1a. Microsoft ignores incorrect P3P statements and allows access (to privacy-relevant information in the broadest sense)
1b. Microsoft detects incorrect P3P statements and denies access (or asks the user)
2. Microsoft IE misinterprets P3P statements
On the other hand, there are some cases for companies like Facebook and Google providing P3P policies:
A. They don’t provide any P3P policy.
B. They provide something that has nothing to do with a P3P policy (like Facebook does).
C. They provide incorrect information about how they deal with privacy, but as a correct P3P policy.
D. They provide incorrect P3P policies.
D1. They do this accidentally.
D2. They do this to bypass the IE privacy settings.
Just looking at the different cases makes it obvious that there are many possible situations. You can build a matrix and then decide whom to blame. I specifically want to look at the combination of case 1b and case C. Having Microsoft IE ask the user for permission, and thus inform him about a potential privacy violation, would be the best approach from my perspective. In that case, IE would either ask the user in cases A, B, and D or deny access to privacy-relevant information altogether. So it comes down to case C – that would be the attack: someone sending P3P policy information but not acting according to that policy. Simply put: case C, from my perspective, is always an attack on the user.
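To illustrate case B, here is a toy check in Python. It is only a sketch: the token list below is a small subset of the real P3P compact-policy vocabulary, and a real browser would do far more. It distinguishes a syntactically plausible compact policy from a header value that is merely prose:

```python
# Small subset of the P3P compact-policy token vocabulary (the full set
# is defined in the W3C P3P specification; this is only illustrative).
KNOWN_TOKENS = {
    "CAO", "PSA", "PSD", "OUR", "IND", "DSP", "COR",
    "NID", "CUR", "ADM", "DEV", "STP", "NAV", "COM",
}

def looks_like_compact_policy(cp_value: str) -> bool:
    """Return True if every whitespace-separated token is a known P3P token."""
    tokens = cp_value.split()
    return bool(tokens) and all(t.upper() in KNOWN_TOKENS for t in tokens)

# Case B: a header value that is prose, not a policy.
print(looks_like_compact_policy("This is not a P3P policy"))  # False
# A plausible compact policy passes the check:
print(looks_like_compact_policy("CAO PSA OUR"))               # True
```

Note what such a check cannot do: it catches cases A, B, and D, but case C – a well-formed policy that misstates the site's actual practice – is undetectable syntactically. That is exactly why case C is the attack.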
Honestly, I don't have sufficient information to decide whom to really blame in the end. It looks as if IE could be more rigid in the way it deals with P3P policies. However, that might be inconvenient for some users (the privacy-agnostic ones). But with a simple option to deactivate P3P "monitoring", this could be avoided. So some users might opt for giving away their privacy while others might opt for more control.
And that would again be about letting the user decide. And, as stated above: if the user opts for privacy, any active bypassing of this is illegitimate at best and potentially illegal.
08.03.2012 by Martin Kuppinger
There has been a lot of noise around Google changing its privacy policies. My esteemed colleague Dave Kearns said that they just consolidated them. I'll stay with "changed", due to the effect: Google can now do much more with the user's data – if the user logs into any Google service. So my point is that the discussion about "changing" versus "consolidating" is splitting hairs. In fact, they have changed the way they deal with privacy.
Google claims to have done this because its customers want it. I doubt that. Customers want single sign-on. But does anybody really believe that customers want Google to have a complete profile of virtually everything they do on the Internet? And does anybody really believe that customers are seeking perfectly targeted advertising? [The choice isn't "ads or no ads" but "targeted or non-targeted", and I believe people prefer targeted.] The same customers that zap through the TV channels during adverts? Or does anyone really believe a customer wants to "sign on" to his search engine?
So that appears to be just a very lame excuse for something Google believes is the business model of the future. I strongly believe it is the business model of the past. Ten years from now, the really successful businesses will be the ones who build a model on providing value to the customer while ensuring their privacy.
Does this affect the customer? It depends. There is always a choice. My choice has been to finally delete Google from the list of my search providers in Internet Explorer. Given that I never have been really active in Google groups and other Google applications, I only relied on Google search. Now I’ve changed to Bing as the standard search provider and finally deleted all my Google accounts (which, by the way, is much easier and more intuitive than deleting a Facebook account).
From an analyst perspective, it will be interesting to see the mid-term effect that Google’s policy changes will have on the market. How will this affect the market shares in the search engine market? What about other markets Google is playing in? It will be also interesting to observe whether and how others like Microsoft can finance their investments into, for example, Bing without tapping into the privacy-violation-trap. And it will be interesting to observe to which degree (and in which regions) the customers will opt for privacy and vote with their feet.
By the way: Forget about deleting your Google history. You can delete the accounts and the associated data, but not what Google has collected in the past. It’s too late for that.
Regarding what Google is doing and what it means, I recommend reading Kim Cameron's recent post:
Kim really hits the nail on the head – and the US Attorneys General do as well. You really should read that post.
Also: the data protection authorities of the EU member states have declared Google's new privacy rules to be non-compliant with European privacy laws. In a letter to Larry Page, they recommended not applying these policies until the issue is finally clarified. Google rejected this. That might form the foundation for Google becoming the first prominent case under the new EU privacy laws, which will probably allow, starting in 2014, fines of up to 5% of annual revenue. That might even make Larry Page rethink Google's position.
Looking at the loud calls for new, draconian political policies, my view is simple: if there are alternatives, those opting for privacy can use them. However, if the will of the user is then ignored and settings are bypassed, I'm a friend of draconian penalties. So it might and should be allowed to sell services that are paid for with privacy – but this model has to depend on user consent, there have to be options for deleting the data and thus revoking that consent, and doing it without consent is just unacceptable. Interestingly, new approaches to Personal Life Management like www.personal.com also allow the use of private data and rely on it – but with user consent and control. They might not be perfect yet, and the business model still has to prove that it works. But it's obvious that it isn't mandatory to give up privacy (at least not beyond a certain point – sort of the Rubicon Google has now crossed) to gain the advantages of the "modern Internet" (i.e. the services users might want to have).
31.01.2012 by Martin Kuppinger
Last Friday, I received two identical emails from LinkedIn contacts informing me about changes to LinkedIn's privacy conditions. Without user consent, LinkedIn is now allowed to use the names and pictures of users in advertisements. Users can revoke the permission in a simple way (see below). However, what LinkedIn has done raises the question of whether the providers of today's social networks will ever learn their privacy lessons.
LinkedIn has once again demonstrated the fundamental misunderstanding of social network providers: that all data therein is their data. In fact, it is the data of the users, not of the social network. There are some upcoming approaches like personal.com which change that paradigm and give users control over their data. Changing privacy policies the way LinkedIn did just shows that they will probably never understand this.
But even when you look at what LinkedIn has done from a business perspective, it doesn't really make sense. What is the value of using the names and pictures of users in advertisements? I don't believe it is very big. However, changing privacy policies without informing users and without asking for consent has automatically led to a lot of negative reactions, like the emails LinkedIn users are sending to their contacts to inform them about this change, or press articles and blog posts. To me it appears that the negative impact is far bigger than the positive outcome of that change.
LinkedIn has successfully managed to change its image from a fairly serious network for business professionals to just another bad guy like Facebook and the others. Maybe they will learn from the reaction of their users, but I doubt it. It looks like the classical social networks, which build their value on the understanding that everything we enter is automatically theirs, won't ever learn that lesson – at least not until other concepts become successful enough to drive them out of the market. But then it might be too late.
To change the privacy settings use the following steps:
1. Place the cursor on your name at the top right corner of the screen. From the small pull down menu that appears, select “Settings”
2. Then click “Account” on the left/bottom
3. In the column next to Account, select the option “Manage Social Advertising”
4. Finally, untick the box "LinkedIn may use my name and photo in social advertising"
5. Click "Save"
15.06.2011 by Martin Kuppinger
One of the intensively discussed issues in Cloud Computing is compliance with local data protection and privacy laws. The European laws, for instance, are sort of “location-dependent”. It is much easier to deal with PII (Personally Identifiable Information) within the borders of the EU than outside of that region. That is the reason why many large Cloud Providers build data centers within the EU to support their European customers.
The question which recently came to my mind is: does it really make sense to focus on location? Shouldn't we rather focus on information security itself? The goal is to avoid abuse of PII and other sensitive information. The laws focus on processing, with a very broad definition of the term "processing". Processing is allowed only if the providers follow specific rules. However: when we clearly define these rules, when we audit the providers, when we certify them – why should the location really matter?
You could argue that there are regions where you wouldn't expect the government to respect these rules. You could argue that there are countries, like the US, where some laws contradict European laws. However, all of that could easily be part of the rules defined in the law. There is no logical reason to do it by location. And looking at the way governments in some European countries act, I wouldn't say that location is the best criterion for enforcing data protection.
From my perspective it would be a good idea to fundamentally rethink data protection laws and to define protection requirement levels for different types of PII and different types of processing. Then, rules for the requirements external (cloud) providers have to fulfill could be defined – independent of location. If one of these rules contradicts the local laws of the country in which the provider has its data center, the result would be the same as today. But overall, we would end up with far more flexibility for the cloud.
However, IT reality is always far ahead of the laws. Thus we probably have to accept that it will take many years until the laws reflect the reality of today’s globally distributed, service-based IT.
04.05.2011 by Martin Kuppinger
The data theft at Sony has been in the headlines for some days now. What puzzles me most is that – from what I read and heard at first – even the passwords were stored unencrypted. However, Sony claims to have used a hash to protect these passwords. It looks like Sony also stored the credit card numbers together with the associated security codes (which are, by the way, one of the most ridiculous approaches to enhancing security) and, no surprise, unencrypted. But if Sony did use hash values: why did everyone assume that these passwords would become common knowledge (at least for the hackers and their "customers")?
But let's start with passwords. Even though it is still done frequently, it is anything but good practice to store passwords unencrypted. You don't even need to store them encrypted: just store a hash, apply the same mathematical algorithm to passwords entered, and compare the hashes. Even though some of the algorithms in that area aren't "bullet-proof", that is far better than storing millions of passwords in the clear. Storing passwords unencrypted is such a fundamental error that you can only call it grossly negligent. That is not a simple mistake but ignorance of fundamental security requirements – even more so when that information is associated with credit card information and other highly sensitive data like bank accounts. If Sony did store hash values, that would be good practice, depending a little on the algorithm used. It reduces the risk for Sony customers, even though there is still some risk when hash values are stolen: passwords might be derived from them, for example via brute-force attacks.
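A minimal sketch of this scheme in Python, using the standard library's PBKDF2 – a salted, deliberately slow hash that makes brute-force attacks on stolen hashes more expensive (the parameter choices here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these are stored, never the password."""
    salt = salt or os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-hash the entered password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Even if an attacker steals the stored salt and digest, he has to brute-force each password individually – which is the whole point of salting and slow hashing.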
Let's look at the next point. Sony has, from what we know, become the victim of an external attack. Accessing large amounts of data most likely involved a SQL injection attack. Interestingly, the Sony PlayStation website had been hit by such an attack before, some three years ago. That something happened before raises the question of why Sony didn't protect the information better. Haven't they heard of database security tools and especially database firewalls? That is exactly the type of technology which helps you protect data like (hashed or unprotected) passwords or credit card data. We recently held several webinars on database security and database governance, the last one yesterday specifically about database firewalls. All the recordings are available.
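For readers unfamiliar with this attack class, here is a minimal sketch (Python with an in-memory SQLite database; the table and values are made up) contrasting a query built by string concatenation with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# Crafted input that abuses string concatenation:
user_input = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and every row is returned.
unsafe_sql = "SELECT email FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe_sql).fetchall())  # [('alice@example.com',)]

# Safe: with a parameterized query the driver treats the whole input as
# a plain value, so the injection attempt matches no user at all.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```

A database firewall adds a second line of defense by inspecting SQL traffic for exactly this kind of anomalous statement – but parameterized queries should be the baseline in the application itself.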
Overall, it doesn't look like this was the most sophisticated hack ever. It looks like no insiders were involved (which would lead to the topic of PxM, i.e. protection against privileged access/users). It looks like Sony simply ignored not just best or good practices, but in many areas even average security practices.
The bad thing about this is that Sony isn't alone out there when it comes to ignoring good or best practices in security. The most common reason is that companies just don't think about security – either because it is too complex or because of the price to pay for it. Hopefully, the Sony case alerts some of the others to review their security and improve it. However, there is a saying in German that hope dies last. And I feel that this is more about hoping than about really expecting websites to become secure by design.
By the way: the European Identity Conference, to be held next week in Munich, covers information security, IAM, GRC, and database security – a good place to learn more and to meet the analysts of KuppingerCole to discuss information security issues in person.