FIDO Alliance announces final FIDO 1.0 specifications

10.12.2014 by Alexei Balaganski

Yesterday, culminating more than 20 months of hard work, the FIDO Alliance published the final 1.0 versions of its Universal Authentication Framework (UAF) and Universal 2nd Factor (U2F) specifications, apparently setting a world record in the process for the fastest development of a standard in the identity management industry.

I wrote a post about the FIDO Alliance in October, when the first public announcement of the specifications was made. Since then, I’ve had an opportunity to test several FIDO-compatible solutions myself, including the Security Key and YubiKey NEO-n from Yubico, as well as the FIDO Ready fingerprint sensor in my Galaxy S5 phone, which now lets me access my PayPal account securely. I’ve studied the documentation and reference code for building U2F support into web applications and cannot wait to try it myself, seeing how easy it looks. Probably the only thing stopping me right now is that my favorite browser hasn’t implemented U2F yet.

Well, I hope that this will change soon, because that’s what publishing finalized specifications is all about: starting today, FIDO Alliance members are free to officially market their FIDO Ready strong authentication solutions, and non-members are encouraged to deploy them with peace of mind, knowing that their implementations will interoperate with current and future products based on these standards. Press coverage of the event seems to be quite extensive, with many non-technical publications picking up the news. I believe that to be another indication of the importance of strong and simple authentication for everyone. Even those who do not understand the technical details are surely picking up the general message of “making the world free of passwords and PINs”.

Those interested in the technical details will probably want to know what has changed in the final version since the last published draft. I’m sure these changes can be found on the FIDO Alliance’s website or in one of their webinars. What is more important, however, is that products released earlier remain compatible with the final specification and that we should expect many new product announcements from FIDO members very soon. We should probably also expect more companies to join the alliance, now that the initiative is gaining traction. Mozilla Foundation, that means you as well!

In the meantime, my congratulations to the FIDO Alliance on another important milestone on their journey to a future without passwords.

Posted in Security | No comments

Quis custodiet ipsos custodes?

08.12.2014 by Alexei Balaganski

Or, if your Latin is a bit rusty, “who will guard the guards themselves?” This was my first thought when I read an article published by Heise Online. Apparently, popular security software from Kaspersky Lab, including at least their Internet Security and Antivirus products, is still susceptible to the now well-known POODLE exploit, which allows hackers to perform a man-in-the-middle attack by downgrading a connection to SSL 3.0 and effectively breaking its cryptographic protection.

When this vulnerability was published in October, many security researchers called for the immediate demise of SSL 3.0, which is a very outdated and in many respects weak protocol; however, quite a lot of older software still doesn’t support TLS, its modern replacement. In the end, many web services, as well as all major browser vendors, implemented some sort of protection against the exploit, either by disabling SSL 3.0 completely or by preventing downgrade attacks using TLS_FALLBACK_SCSV. For a couple of months, we felt safe again.

Well, it turns out that getting rid of POODLE isn’t as easy as we thought: it’s not enough to harden both ends of the communication channel, you have to think about the legitimate “men in the middle” as well, which can still be unpatched and vulnerable. This is exactly what happened to Kaspersky’s security products: as soon as the option “Scan encrypted connections” is enabled, they intercept an outgoing secure connection, decrypt and analyze its content, and then establish a new secure connection to the appropriate website. Unfortunately, this new connection still uses SSL 3.0, ready to be exploited.

Think about it: even if you have the latest browser that explicitly disables SSL 3.0, your antivirus software would secretly make your security worse without letting you know (your browser connects to the local proxy using the newer TLS protocol, which looks perfectly safe). Just as I wrote regarding the Heartbleed bug in April: “there is a fundamental difference between being hacked because of ignoring security best practices and being hacked because our security tools are flawed”. The latter not only adds insult to injury, it can severely undermine users’ trust in security software, which in the end is bad for everyone, even the particular vendor’s competitors.
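To make the failure mode concrete, here is a minimal sketch (purely illustrative, not based on any vendor’s actual code) of the invariant a TLS-intercepting proxy must preserve: the upstream leg must never be negotiated at a weaker protocol level than the client-facing leg. The protocol names and their ordering are assumptions for illustration.

```python
# Protocols ordered from weakest to strongest (illustrative list).
PROTOCOL_ORDER = ["SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"]

def upstream_is_downgrade(client_side: str, upstream_side: str) -> bool:
    """Return True if the proxy's upstream connection is weaker than
    what the client itself negotiated, silently lowering security."""
    return PROTOCOL_ORDER.index(upstream_side) < PROTOCOL_ORDER.index(client_side)

# The situation described above: the browser speaks TLS 1.2 to the local
# proxy, but the proxy re-connects to the real site over SSL 3.0.
print(upstream_is_downgrade("TLSv1.2", "SSLv3"))
```

A proxy applying this check would refuse (or at least flag) the SSL 3.0 upstream leg instead of silently accepting it.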

The problem seems to have been originally discovered by a user who posted his findings on Kaspersky’s support forum. I must admit I find the support engineer’s reply very misleading: the SSL vulnerability is by no means irrelevant, and one can imagine multiple scenarios where it could lead to leaks of sensitive data.

Well, at least, according to Heise, the company is already working on a patch, which will be released sometime in January. Until then, you should think twice before enabling this option: who is going to protect your antivirus, after all?



Regin Malware: Stuxnet’s Spiritual Heir?

26.11.2014 by Alexei Balaganski

As if the IT security community hadn’t had enough bad news recently, this week began with a big one: according to a report from Symantec, a new, highly sophisticated piece of malware has been discovered, which the company dubbed “Regin”. Apparently, the level of complexity and customizability of the malware rivals, if not trumps, that of its famous relatives, such as Flamer, Duqu and Stuxnet. Obviously, the investigation is still ongoing and Symantec, together with other researchers like Kaspersky Lab and F-Secure, is still analyzing the findings, but even the scarce details available allow us to draw a few far-reaching conclusions.

Let’s begin with a short summary of the currently known facts (although I do recommend reading the full reports from Symantec and Kaspersky Lab linked above; they are really fascinating, if a bit too long):

  1. Regin isn’t really new. Researchers have been studying its samples since 2012, and the initial version seems to have been in use since at least 2008; several components have timestamps from 2003. That makes you appreciate even more how it managed to stay under the radar for so long. And did it really? According to F-Secure, at least one company affected by this malware two years ago explicitly decided to keep quiet about it. What fertile ground for conspiracy theorists!
  2. Regin’s level of complexity trumps that of practically any other known piece of malware. Five stages of deployment; built-in drivers for encryption, compression, networking and virtual file systems; various stealth techniques; different deployment vectors; and, most importantly, a large number of payload modules: everything indicates the level of technical competence and financial investment of a state-sponsored project.
  3. Nearly half of the affected targets have been private individuals and small businesses, and the primary vertical the malware appears to be targeting is the telecommunications industry; according to Kaspersky Lab’s report, code for spying on GSM networks has been discovered in it. Geographically, the primary targets appear to be Russia and Saudi Arabia, as well as Mexico, Ireland and several other European and Middle Eastern countries.

So, is Regin really the new Stuxnet? Well, no. Surely, its incredible level of sophistication and flexibility indicates that it almost certainly is the result of state-sponsored development. However, Regin’s mode of operation is the complete opposite of its predecessor’s. Stuxnet was a highly targeted attack on Iranian nuclear enrichment facilities with the ultimate goal of sabotaging their work. Regin, on the other hand, is an intelligence-gathering spyware tool, and it doesn’t seem to be targeted at a specific company or government organization. On the contrary, it’s a universal and highly flexible tool designed for long-term covert operations.

Symantec has carefully avoided naming a concrete nation-state or agency that may have been behind this development, but the fact that no infections have been observed in the US or UK is already giving people ideas. And, looking at the Regin discovery as a part of a bigger picture, this makes me feel uneasy.

After Snowden’s revelations, there was a lot of hope that public outcry and pressure on governments would somehow lead to major changes limiting intelligence agencies’ powers for cyber espionage. Unfortunately, nothing of the kind has happened yet. In fact, looking at the FUD campaign the FBI and DoJ are currently waging against mobile vendors (“because of your encryption, children will die!”), or the fact that the same German BND intelligence service that is promoting mandatory encryption is quietly seeking to install backdoors into email providers and spending millions on zero-day exploits, there isn’t much hope for change left. Apparently, they seem oblivious to the fact that they are not just undermining trust in the organizations that supposedly exist to protect us from foreign attackers, but are also opening new attack surfaces for those attackers by setting up backdoors and financing the development of new exploits. Do they honestly believe that such a backdoor or exploit won’t be discovered and abused by hackers? This could probably be a topic for a separate blog post…

Isn’t it ironic that among all the talks about Chinese and Russian hackers, the biggest threat to our cybersecurity might come from the West?



Big News from the FIDO Alliance

23.10.2014 by Alexei Balaganski

The FIDO Alliance (where FIDO stands for Fast IDentity Online) is an industry consortium formed in July 2012 with the goal of addressing the lack of interoperability among various strong authentication devices. Its current members include strong authentication solution vendors (such as RSA, Nok Nok Labs or Yubico), payment providers (VISA, MasterCard, PayPal, Alibaba), as well as IT industry giants like Microsoft and Google. The mission of the FIDO Alliance has been to reduce reliance on passwords for authentication and to develop specifications for open, scalable and interoperable strong authentication mechanisms.

KuppingerCole has been closely following the FIDO Alliance’s progress for the last couple of years. Initially, Martin Kuppinger was somewhat skeptical about the alliance’s chances of gaining enough support and acceptance among vendors. However, the steady influx of new members, as well as announcements like the first FIDO authentication deployment by PayPal and Samsung earlier this year, confirms the alliance’s dedication to leading a paradigm shift in the current authentication landscape. It’s not just about getting rid of passwords, but about giving users the opportunity to rely on their own personal digital identities, potentially bringing an end to the current rule of social logins.

After years of collaboration, the Universal Authentication Framework and Universal 2nd Factor specifications were made public in October 2014. This was closely followed by several announcements from different alliance members unveiling products and solutions implementing the new FIDO U2F standard.

The one that definitely made the biggest splash is, of course, Google’s announcement of strengthening its existing 2-step verification with a hardware-based second factor, the Security Key. Although Google has been a strong proponent of multi-factor authentication for years, its existing infrastructure is based on one-time codes sent to users’ mobile devices. Such schemes are known to be prone to various attacks and cannot protect users from falling victim to a phishing attack.

The Security Key (a physical USB device manufactured by Yubico) enables much stronger verification based on cryptographic algorithms. Each service gets its own cryptographic key, which means that users can reliably tell a real Google website from a fake one. Surely, this first deployment based on a USB device has its deficiencies as well; for example, it won’t work on current mobile devices, since they all lack a suitable USB port. However, since the solution is based on a standard, it’s expected to work with any compatible authentication devices or software solutions from other alliance members.

Currently, U2F support is available only in Google’s Chrome browser, but since the standard is backed by such a large number of vendors, including major players like Microsoft or Salesforce, I am sure that other browsers will follow soon. Another big advantage of an established standard is the availability of libraries that enable quick inclusion of U2F support into existing client applications and websites. Yubico, for example, provides a set of libraries for different languages, and Google offers open-source reference code for the U2F specification as well.
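To illustrate how little server-side code the basic U2F flow requires, here is a rough sketch of two server-side steps, challenge generation and client-data verification, using only the Python standard library. This is a simplified illustration of ideas from the specification, not a complete implementation (the real protocol additionally verifies an ECDSA signature and an attestation certificate), and all function names are my own.

```python
import base64
import json
import os

def websafe_b64(data: bytes) -> str:
    # U2F uses unpadded URL-safe base64 encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def new_register_request(app_id: str) -> dict:
    """Create the challenge object the browser would pass to u2f.register()."""
    return {
        "version": "U2F_V2",
        "appId": app_id,
        "challenge": websafe_b64(os.urandom(32)),
    }

def client_data_matches(client_data_b64: str, expected_challenge: str,
                        expected_origin: str) -> bool:
    """Check the clientData blob against the stored challenge and origin.
    The origin binding is what lets users reliably tell a real site
    from a phishing site."""
    pad = "=" * (-len(client_data_b64) % 4)  # restore base64 padding
    data = json.loads(base64.urlsafe_b64decode(client_data_b64 + pad))
    return (data.get("challenge") == expected_challenge
            and data.get("origin") == expected_origin)
```

In this sketch, a response signed for a phishing origin simply fails the `client_data_matches` check, which is the property that one-time codes sent by SMS cannot provide.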

In a sense, this first large-scale U2F deployment by Google is just the first step in a long journey towards the ultimate goal of getting rid of passwords completely. But it looks like a large group sharing the same vision has much better chances of reaching that goal than anybody planning to walk all the way alone.



GlobalSign acquires Ubisecure, plans to win the IoE market

30.09.2014 by Alexei Balaganski

GlobalSign, one of the world’s biggest certificate authorities and a leading provider of digital identity services, has announced today that it has acquired Ubisecure, a Finnish privately held software development company specializing in Identity and Access Management solutions.

Last year, KuppingerCole recognized Ubisecure as a product leader in our Leadership Compass on Access Management and Federation. Support for a broad range of authentication methods, including national ID cards and banking cards, as well as integrated identity management capabilities with configurable registration workflows, were noted among the product’s strengths. Above all, however, it is the solution’s focus on enabling identity services on a large scale, targeted at governments and service providers, that KuppingerCole identified as Ubisecure’s primary strength.

Unfortunately, until recently the Helsinki-based company has only been present in EMEA (mainly in the Nordic countries), obviously lacking the resources to maintain a strong partner network. GlobalSign’s large worldwide presence, with 9 international offices and over 5,000 reseller partners, provides a unique opportunity to bring Ubisecure’s technology to the global market quickly and with little effort.

GlobalSign, established in 1996, is one of the oldest and biggest, as well as reportedly the fastest growing certificate authorities on the market. After becoming a part of the Japanese group of companies GMO Internet Inc. in 2006, GlobalSign has been steadily expanding its enterprise presence with services like enterprise PKI, cloud-based managed SSL platform, and strategic collaborations with cloud service providers. With the acquisition of Ubisecure, the company is launching its new long-term strategy of becoming a leading provider of end-to-end identity services for smart connected devices, powering the so-called Internet of Everything.

Market analysts currently estimate that up to 50 billion such devices (or simply “things”) will be connecting to the Internet within the next 10 years. This may well be the largest technology market in history, with over $14 trillion at stake. Needless to say, the new trend brings new critical challenges that have to be addressed, such as device security and malware protection; probably the biggest of all, however, is going to be providing identity services on a massive scale: mediating trust for billions of online transactions between people and “things” every minute and ensuring the safety of e-commerce, communications, and content delivery.

A company that manages to bring a service with such capabilities to market first will definitely be in a very attractive position, and GlobalSign, with its strong background in identity-related solutions, massive existing customer base and large partner network, is aspiring to grab that position by making Ubisecure’s innovative technology available globally. Time will tell how well it can compete against the technological giants on the market, as well as against other API vendors with a strong IAM background (Ping Identity and CA / Layer 7 come to mind). Still, recognizing a rare combination of innovative technology and solid market presence, we believe GlobalSign to be a market player definitely worth watching.



First Heartbleed, now Shellshock?

29.09.2014 by Alexei Balaganski

Half a year has passed since the discovery of the dreaded Heartbleed bug, and the shock of that incident, which many dubbed the most serious security flaw in years, had finally begun to wear off. Then the security community was shocked again last week, when details of a new critical vulnerability in another widely used piece of software were made public after the initial embargo.

Apparently, Bash, arguably the most popular Unix shell, used on hundreds of millions of servers, personal computers, and network devices, contains a critical bug in the way it processes environment variables, which causes unintentional execution of system commands stored in those variables (you can find a lot of articles explaining the details, ranging from pretty simple to deeply technical). Needless to say, this provides ample opportunity for hackers to run malicious commands on affected machines, whether remotely through a network service or through local access. What’s worse, the bug has remained unnoticed for over twenty years, which means that huge numbers of legacy systems are affected as well (as opposed to Heartbleed, which was caused by a bug in a recent version of OpenSSL).

Given the huge number of affected devices, many security researchers have already called Shellshock “bigger than Heartbleed”. In my opinion, however, comparing the two problems directly isn’t that simple. The biggest problem with the Heartbleed bug was that it affected even those companies that had been consistently following security best practices, simply because the most important security tool itself was flawed. Ironically, those who had failed to patch their systems regularly and were still using an old OpenSSL version were not affected.

The Shellshock bug, however, is different: Bash itself, being simply a command-line tool for system administrators, is usually not directly exposed to the Internet, and the vulnerability can only be exploited through other services. In fact, if your IT staff has been following reasonably basic security guidelines, the impact on your network will already be minimal, and with a few additional steps it can be prevented completely.

The major attack vector for this vulnerability is naturally CGI scripts. Although CGI is a long-outdated technology which, quite frankly, has no place on a modern web server, it’s still found on a lot of public web servers. For example, the popular Apache web server has a CGI module enabled by default, which means that hackers can use the Shellshock bug as a new means to deploy botnet clients on web servers, steal system passwords and so on. There have already been numerous reports of attacks exploiting the Shellshock bug in the wild. Researchers also report that weaknesses in DHCP clients or SSH servers can potentially be exploited as well; however, this requires special conditions to be met and can easily be prevented by administrators.
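The mechanics are easy to demonstrate: a CGI gateway copies request headers into environment variables (the RFC 3875 convention, where User-Agent becomes HTTP_USER_AGENT), so a crafted header reaches a vulnerable Bash. The sketch below, with hypothetical function names, mimics that mapping and shows the kind of pattern filter an administrator might apply; it is illustrative only and no substitute for patching Bash.

```python
import re

# The telltale "() {" function-definition marker used by Shellshock payloads.
SHELLSHOCK_PATTERN = re.compile(r"\(\)\s*\{")

def cgi_environment(headers: dict) -> dict:
    """Mimic how a CGI gateway exports request headers as environment
    variables: 'User-Agent' becomes 'HTTP_USER_AGENT', and so on."""
    return {"HTTP_" + name.upper().replace("-", "_"): value
            for name, value in headers.items()}

def looks_like_shellshock(headers: dict) -> bool:
    """Flag a request whose exported environment would carry the marker."""
    return any(SHELLSHOCK_PATTERN.search(value)
               for value in cgi_environment(headers).values())

# A typical in-the-wild payload seen in web server access logs:
attack = {"User-Agent": "() { :;}; /bin/ping -c 1 attacker.example"}
print(looks_like_shellshock(attack))
```

A vulnerable Bash invoked by the CGI script would parse that HTTP_USER_AGENT value as a function definition and execute the trailing command.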

So, what are our recommendations on dealing with the Shellshock bug?

For consumers:

First of all, you should check whether your computers or network devices are affected by the bug at all. Computers running various Unix flavors are vulnerable, most importantly many Linux distributions and OS X. Obviously, Windows machines are not affected unless they have Cygwin software installed. Most embedded network devices, such as modems and routers, although Linux-based, use a different shell, BusyBox, which doesn’t have the bug. As for mobile devices, stock iOS and Android do not contain the Bash shell, but jailbroken iOS devices and custom Android firmwares may have it installed.

A simple test for checking whether your shell is vulnerable is this command:

env X='() { :;}; echo vulnerable' bash -c "echo hello"

If you see “vulnerable” after running it, you know your shell is affected and you should immediately look for a security update. Many vendors have already issued patches for their OS distributions (and although Apple is still working on an official patch, there are instructions available for fixing the problem DIY-style).

For network administrators:

Obviously, you should install security updates as well, but stopping there would not be a good idea. Although a series of patches for the currently described Bash vulnerabilities has already been issued, researchers warn that Bash was never designed with security in mind and that new vulnerabilities may be discovered in it later. A reasonable, if somewhat drastic, measure would be to replace Bash on your servers with a different shell, since just about every other shell does not interpret commands stored in environment variables and is therefore inherently immune to this particular exploit.

Another important measure would be to check all network services that can interact with Bash and harden their configurations appropriately. This includes, for example, the ForceCommand feature in OpenSSH.
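As a hypothetical example of that hardening step (the group name and wrapper path are made up for illustration; consult the sshd_config documentation for your system), ForceCommand can pin key-based logins to a single fixed command, so a Shellshock payload smuggled in via SSH_ORIGINAL_COMMAND never reaches an interactive Bash:

```
# Illustrative sshd_config fragment: restrict a group of automation
# accounts to one fixed command instead of an arbitrary shell.
Match Group backupusers
    ForceCommand /usr/local/bin/backup-wrapper.sh
```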

Last but not least, you should make sure that your network security tools are updated to recognize the new attack. Security vendors are already working on adding new tests to their software.

For web application developers:

Do not use CGI. Period.

If you are stuck with a legacy application you still have to maintain, you should at least put it behind some kind of “sanitizing proxy” that filters out requests carrying the malicious payload in their headers or parameters. Many vendors offer specialized solutions for web application security; however, budget solutions using open-source tools like nginx are possible as well.
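As a rough illustration of such a budget setup (the upstream name and the exact header list are assumptions, and a real deployment would need tuning), an nginx front end can simply refuse requests whose common header values carry the telltale “() {” marker before they ever reach the legacy CGI application:

```
# Illustrative nginx front-end filter -- not a complete WAF.
server {
    listen 80;

    # Reject requests whose headers contain the Shellshock marker.
    if ($http_user_agent ~ "\(\)\s*\{") { return 403; }
    if ($http_cookie     ~ "\(\)\s*\{") { return 403; }
    if ($http_referer    ~ "\(\)\s*\{") { return 403; }

    location / {
        proxy_pass http://legacy-cgi-app;  # the application being shielded
    }
}
```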

So, if the Shellshock bug can be fixed so easily, why are security researchers so worried about it? The main reason is the sheer number of legacy devices that will never be patched and will remain exposed to the exploit for years. Another burning question for IT departments is: how long have hackers (or worse, the NSA) been aware of the bug, and for how long could they have been secretly exploiting it? Remember, the upper limit for this guess is 22 years!

And of course, in an even longer perspective, the problem raises a lot of new questions regarding the latest IT fad: the Internet of Things. Now that we already have smart fridges and smart cars and will soon have smart locks and smart thermostats installed everywhere, how can we make sure that all these devices remain secure in the long term? Vendors predict that in 10 years there will be over 50 billion “things” connected to a global network. Can you imagine patching 50 billion Bash installations? Can you afford not patching your door lock? Will you be able to install an antivirus on your car? It looks like we need to have a serious talk with IoT vendors. How about next year at our European Identity and Cloud Conference?



Real-time Security Intelligence: history, challenges, trends

02.09.2014 by Alexei Balaganski

Information security is just as old as information technology itself. As soon as organizations began to depend on IT systems to run their business processes and to store and process business information, it became necessary to protect these systems from malicious attacks. The first concepts of tools for detecting and fighting off intrusions into computer networks were developed in the early 1980s, and over the following three decades security analytics has evolved through several different approaches, reflecting the evolution of the IT landscape as well as changing business requirements.

First-generation security tools – firewalls and intrusion detection and prevention systems (IDS/IPS) – were essentially solutions for perimeter protection. Firewalls were traditionally deployed on the edge of a trusted internal network and were meant to prevent attacks from the outside world. The first firewalls were simple packet filters, effective at blocking known types of malicious traffic or protecting against known weaknesses in network services. Later generations of application firewalls can understand certain application-layer protocols and thus provide additional protection for specific applications: mitigating cross-site scripting attacks on websites, protecting databases from SQL injections, performing DLP functions, etc. Intrusion detection systems can be deployed within networks, but older signature-based systems were only capable of reliably detecting known threats, while later statistical anomaly-based solutions were known to generate an overwhelming number of false alerts. In general, tuning an IDS for a specific network has always been a difficult and time-consuming process.

These traditional tools are still widely deployed by many organizations and in certain scenarios serve as a useful part of enterprise security infrastructures, but recent trends in the IT industry have largely made them obsolete. The continued deperimeterization of corporate networks, driven by the adoption of cloud and mobile services as well as the emergence of many new legitimate communication channels with external partners, has made the task of protecting sensitive corporate information more and more difficult. The focus of information security has gradually shifted from perimeter protection towards detection of and defense against threats within corporate networks.

The so-called Advanced Persistent Threats usually involve multiple attack vectors and consist of several covert stages. These attacks may go undetected for months and cause significant damage to unsuspecting organizations. Often they are first uncovered by external parties, adding reputational damage to the financial losses. A well-planned APT may exploit several different vulnerabilities within the organization: an unprotected gateway, a bug in an outdated application, a zero-day attack exploiting a previously unknown vulnerability, and even social engineering, targeting the human factor so often neglected by IT security.

By the mid-2000s, it was obvious that efficient detection of and defense against these attacks required a completely new approach to network security. The need to analyze and correlate security incidents from multiple sources, to manage a large number of alerts and to be able to perform forensic analysis has led to the development of a new organizational concept, the Security Operations Center (SOC). A SOC is a single location where a team of experts monitors security-related events across the entire enterprise’s information systems and takes action against detected threats. Many large enterprises have established their own SOCs; for smaller organizations that cannot afford the considerable investment and a skilled security staff of their own, such services are usually offered as a Managed Security Service.

The underlying technological platform of a security operations center is SIEM (Security Information and Event Management): a set of software and services for gathering, analyzing and presenting information from various sources, such as network devices, applications, logging systems, or external intelligence sources. The term was coined in 2005, and the concept was quickly adopted by the market: currently there are over 60 vendors offering SIEM solutions in various forms. There was a lot of initial hype around the SIEM concept, as it was offered as a turnkey solution for all the security-related problems mentioned above. Reality, however, has shown that although SIEM solutions are very capable sets of tools for data aggregation, retention and correlation, as well as for monitoring, alerting and reporting of security incidents, they are still just tools, requiring one team of experts to deploy and customize them and another team to run them on a daily basis.

Although SIEM solutions are now widely adopted by most large enterprises, there are several major challenges that, according to many information security officers, prevent organizations from using them efficiently:

  • Current SIEM solutions require specially trained security operations experts to operate; many organizations simply do not have enough resources to maintain such teams.
  • Current SIEM solutions generate too many false positive alerts, forcing security teams to deal with overwhelming amounts of unnecessary information. Obviously, current correlation and anomaly detection algorithms are not efficient enough.
  • The degree of integration offered by current SIEM solutions is still insufficient to provide a truly single management console for all kinds of operations. Responding to a security incident may still require performing too many separate actions using different tools.

Another common shortcoming of current SIEM solutions is a lack of flexibility when dealing with unstructured data. Since many of the products are based on relational databases, they enforce rigid schemas on collected information and do not scale well when dealing with large amounts of data. This obviously prevents them from detecting threats efficiently in real time.

Over the last couple of years, these challenges have led to the emergence of the “next-generation SIEM”, or rather a completely new technology called Real-time Security Intelligence (RTSI). Although the market is still in an early stage, it is already possible to summarize the key differentiators of RTSI offerings compared to previous-generation SIEM tools:

  • Real-time or near-real-time detection of threats, enabling quick remediation before damage is done;
  • The ability to correlate real-time and historical data from various sources, as well as to apply intelligence from external security information services, thus detecting malicious operations as whole events, not separate alerts;
  • A small number of clearly actionable alarms, achieved by reducing the false positive rate and introducing different risk levels for incidents;
  • Automated workflows for responding to detected threats, such as disrupting clearly identified malware attacks or submitting a suspicious event to a managed security service for further analysis.

The biggest technological breakthrough that made these solutions possible is Big Data analytics. The industry has finally reached the point where business intelligence algorithms for large-scale data processing, previously affordable only to large corporations, have become commoditized. Utilizing readily available frameworks such as Apache Hadoop and inexpensive hardware, vendors are now able to build solutions for collecting, storing and analyzing huge amounts of unstructured data in real time.

This makes it possible to combine real-time and historical analysis and to identify new incidents as related to others that occurred in the past. Combined with external security intelligence sources that provide current information about the newest vulnerabilities, this can greatly facilitate the identification of ongoing APT attacks on the network. Having a large amount of historical data at hand also significantly simplifies the initial calibration to the normal activity patterns of a given network, which are then used to identify anomalies. Existing RTSI solutions are already capable of automated calibration with very little input required from administrators.
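The calibration idea can be reduced to a toy example: learn a baseline from historical event counts, then flag values that deviate too far from it. Real RTSI products use far richer models than this; the simple three-sigma threshold below is purely an illustrative assumption.

```python
import statistics

def calibrate(history):
    """Learn the 'normal' activity pattern from historical event counts."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hourly login counts from the calibration period, then a suspicious spike:
history = [102, 98, 110, 95, 105, 99, 101, 103]
mean, stdev = calibrate(history)
print(is_anomalous(104, mean, stdev))  # within the learned baseline
print(is_anomalous(500, mean, stdev))  # flagged as an anomaly
```

With enough historical data at hand, the baseline can be learned automatically, which is exactly the low-effort calibration described above.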

Alerting and reporting capabilities of RTSI solutions are also significantly improved. Big Data analytics technology can generate a small number of concise and clearly categorized alerts that allow even an inexperienced person to make a relevant decision, yet provide a forensic expert with much more detail about the incident and its relations to other historical anomalies.

As mentioned above, the RTSI market is still in its early stage. There are many new offerings with various scopes of functionality, from established IT security vendors as well as startups, available today or planned for release in the near future. It is still difficult to predict in which direction the market will evolve and which features should be expected from an innovation leader. However, it is already clear that only the vendors that offer complete solutions, and not just a set of tools, will win the market. It is important to understand that Real-time Security Intelligence is more than just SIEM 2.0.

This article was originally published in the KuppingerCole Analysts’ View Newsletter. Also check out video statements of my colleagues Mike Small and Rob Newby on this topic.


Posted in Security | No comments

Did someone just steal my password?

07.08.2014 by Alexei Balaganski

Large-scale security breaches are nothing new. Last December we heard about the hack of the American retail chain Target's network, in which over 40 million credit cards and 70 million addresses were stolen. This May, eBay announced that hackers got away with more than 145 million customer records. And the trend doesn't stop: despite all the efforts of security researchers and government institutions, data breaches occur more frequently and get bigger and more costly. The average total cost of a data breach for a company is currently estimated at $3.5 million. The public has heard about these breaches so often that it has become somewhat desensitized to them. However, the latest announcement from the American company Hold Security should make even the laziest people sit up and take notice.

Apparently, a gang of cybercriminals from Russia, which the company dubbed CyberVor ("cyber thief" in Russian), has managed to amass the largest known collection of stolen credentials: over 1.2 billion passwords and more than 500 million email addresses! The company hasn't revealed many details, but these were not, of course, the spoils of a single breach – the gang has allegedly compromised over 420 thousand websites over the course of several years. Still, the numbers are overwhelming: the whole collection contains over 4.5 billion records. Surely I could be somewhere in that huge list, too? What can I do to prevent hackers from stealing my precious passwords? Can someone help me with that?

In a sense, we still live in the era of the Internet Wild West. No matter how often passwords are proclaimed dead and how hard security vendors try to sell their alternative, more secure authentication solutions, no matter how long government commissions discuss stricter regulations and larger fines for data breaches – way too many companies around the world still store their customers' credentials in clear text, and way too many users still use the same password, "password", for all their accounts. Maybe in twenty years or so we will remember these good old days of "Internet Freedom" with romantic nostalgia, but for now we have to face the harsh reality of a world where nobody is going to protect our personal information for us.

This, by the way, reminds me of another phenomenon of the Wild West era: snake oil peddlers. Unfortunately, quite a few security companies now attempt to capitalize on the fear of data breaches in a similar way. Instead of providing customers with the means to protect their credentials, they offer services like "pay to see whether your account has been stolen". And these services aren't cheap.

Surely these companies need to earn money just like everyone else, but charging people for such useless information is dubious at best. I'm not even going to mention that there might even be services out there that are essentially good old phishing sites, collecting your credentials to use them for malicious purposes.

As the famous Russian novel "The Twelve Chairs" states, mocking a common propaganda slogan of the early Soviet period: "Assistance to drowning persons is in the hands of those persons themselves." I published a blog post some time ago outlining a list of simple rules to protect yourself from the consequences of a data breach: create long and complex passwords, do not reuse the same password across sites, invest in a good secure password manager, look for sites that support two-factor authentication, and so on. Of course, this won't prevent future breaches from happening (apparently, nothing can), but it will help minimize the consequences: in the worst case, only one of your accounts will be compromised, not all of them.
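The first two rules are easy to automate. As a minimal illustration (the function name and 20-character default are my own, not a recommendation from any particular tool), here is how a long random password can be generated with Python's cryptographically secure `secrets` module:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a long random password from letters, digits and
    punctuation, using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A unique password per site means one breach compromises one account.
for site in ("shop.example", "mail.example"):
    print(site, generate_password())
```

A password manager does exactly this for you and remembers the result, so there is no excuse for reusing "password" everywhere.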

Whenever you hear that a website you use has been hacked, you no longer have to wonder whether your credentials have been stolen: you simply assume the worst, spend a minute changing your password, and rest assured that the hackers have no use for your old credentials anymore. This way, you not only avoid exposure to "CyberVors", but also don't let "CyberZhuliks" (cyber fraudsters) make money by selling you their useless services.



Operation Emmental: another nail in the coffin of SMS-based two-factor authentication

23.07.2014 by Alexei Balaganski

On Tuesday, the security company Trend Micro unveiled a long and detailed report on "Operation Emmental", an ongoing attack on online banking sites in several countries around the world. The attack is able to bypass the popular mTAN two-factor authentication scheme, which uses SMS messages to deliver transaction authorization numbers. Few details about the scale of the operation have been revealed, but apparently the attack was first detected in February and has affected over 30 banking institutions in Germany, Austria and Switzerland, as well as Sweden and Japan. The hackers supposedly got away with millions stolen from both consumer and commercial bank accounts.

Now, this is definitely not the first time hackers have defeated SMS-based two-factor authentication. Trojans designed to steal mTAN codes directly from mobile phones first appeared in 2010. Contrary to popular belief, these Trojans do not target only Android phones: in fact, the most widespread one, ZeuS-in-the-Mobile, has been discovered on various mobile platforms, including Android, Symbian, BlackBerry and Windows Mobile. In 2012, an attack campaign dubbed "Eurograbber" successfully stole over 36 million euros from banks in Italy, Spain and the Netherlands. Numerous smaller-scale attacks have been uncovered by security researchers as well. So, what exactly is new and different about the Emmental attack?

First, it's necessary to explain in a few words how a typical attack like Eurograbber actually works.

  1. Using traditional methods like phishing emails or compromised websites, hackers lure a user into clicking a link and downloading a Windows-based Trojan onto their computer. This Trojan runs in the background and waits for the user to visit their online banking site.
  2. As soon as the Trojan detects a known banking site, it will inject its own code into the web page. This code can, for example, display a “security advice” instructing the customer to enter their mobile phone number.
  3. As soon as the hackers have a phone number, an SMS message with a link to a mobile Trojan is sent to it and the customer is instructed to install the malicious SMS-grabbing app on their phone.
  4. With both the customer’s online banking PIN and the SMS TAN, hackers can easily initiate a fraudulent transaction, transferring money out of the customer’s account.

It’s quite obvious that such a scheme can only work when both the PC and mobile Trojans operate in parallel, coordinating their actions through a C&C server run by the hackers. This also means it can be disrupted relatively easily, simply by using an antivirus that detects and disables the Trojan. Another method is deploying special software on the banking site that detects and prevents web page injections.

The hackers behind the Emmental attack use a different approach. Instead of delivering a Trojan to the customer's computer, they use a small agent that masquerades as a Windows updater. On launch, this program changes the local DNS settings, replacing the IP addresses of known online banking sites with the address of a server controlled by the hackers. Additionally, it installs a new root SSL certificate, which makes browsers consider this rogue server trusted. After that, the program deletes itself, leaving no traces of malware on the computer.
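One defensive technique that blunts exactly this trick is certificate pinning: instead of trusting whatever the (possibly tampered) OS trust store accepts, a client compares the server certificate's fingerprint against a known-good value obtained out-of-band. The sketch below is purely illustrative; the byte strings stand in for real DER-encoded certificates, which a real client would obtain from the TLS handshake:

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def cert_is_pinned(der_cert: bytes, pinned_fingerprint: str) -> bool:
    """Accept only the exact pinned certificate, regardless of whether
    the OS trust store (possibly poisoned with a rogue root CA) would
    consider the presenting server trusted."""
    return fingerprint(der_cert) == pinned_fingerprint

# Stand-in certificate bytes for illustration only.
genuine_cert = b"genuine-bank-der-bytes"
pin = fingerprint(genuine_cert)  # recorded out-of-band, in advance

attacker_cert = b"attacker-der-bytes"
print(cert_is_pinned(genuine_cert, pin))   # the real bank passes
print(cert_is_pinned(attacker_cert, pin))  # the rogue server fails
```

Because the pin is checked against a specific certificate rather than the machine's trust store, the injected rogue root CA buys the attackers nothing; this is one reason mobile banking apps increasingly pin their backend certificates.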

The rest of the attack is similar to the one described above, but with a twist: the user never connects to the real banking site again; all communication takes place with the fraudulent server. This deception can continue for a long time, and only after receiving a monthly statement from the bank would the user find out that their account has been emptied.

In other words, while Emmental is not the first attack on the mTAN infrastructure, it's an important milestone, demonstrating that hackers are actively working on new methods of defeating it and that existing solutions meant to make banks more resilient against this type of attack are much less effective than believed. SMS-based two-factor authentication has been compromised and should no longer be considered a strong authentication method. The market already offers a broad range of alternatives, from smartcards and OTP tokens to Mobile ID and smartphone apps. It's really time to move on.



What’s the deal with the IBM/Apple deal?

18.07.2014 by Alexei Balaganski

So, unless you’ve been hiding under a rock this week, you’ve definitely heard about the historic global partnership deal forged between IBM and Apple on Tuesday. The whole Internet has been abuzz for the last few days, discussing what long-term benefits the partnership will bring to both parties, as well as guessing which competitors will suffer the most from it.

Different publications have named Microsoft, Google, Oracle, SAP, Salesforce and even BlackBerry as the companies the deal was primarily targeted against. Well, at least for BlackBerry this could indeed be one of the last nails in the coffin, as their shares plummeted after the announcement and the trend seems to be long-term. IBM’s and Apple’s shares rose, unsurprisingly; however, financial analysts don’t seem to be too impressed (in fact, some recommend selling IBM stock). This, however, is not the point of my post.

Apple and IBM have a history of bitter rivalry. 30 years ago, when Apple unveiled its legendary “1984” commercial, it was a tiny contender against IBM’s domination of the PC market. How times have changed! Apple has since grown into the largest player in the mobile device market, with a market capitalization several times larger than IBM’s. IBM sold its PC hardware business to Lenovo years ago and now concentrates on enterprise software, cloud infrastructure, big data analytics and consulting. So, they are no longer competitors, but can we really consider them equal partners? Apple’s cash reserves continue to grow, while IBM’s revenues have been declining over the last two years. After losing a $600M contract with the US government to AWS last year, a partnership with Apple is a welcome change for IBM.

So, what’s in this deal, anyway? In short, it includes the following:

  • IBM introduces its enterprise software and services platform, as well as a set of over 100 industry-specific enterprise apps specifically designed and optimized for Apple devices. The newly announced MobileFirst platform is technically based on IBM’s existing MaaS360 mobile device management platform and BlueMix, a service development platform for IBM cloud, but newly packaged and optimized specifically for iOS 8.
  • With IBM’s help, Apple’s existing customer support service AppleCare will be extended to enterprise customers. IBM will take over support services for its Apple-tailored enterprise solutions, as well as provide on-site support for iOS and Mac devices.
  • IBM will become Apple’s first worldwide distributor and reseller, offering procurement and reselling of devices, security and activation, and other managed services.

For Apple, this deal marks a renewed attempt to get a better hold of the enterprise market. It’s well known that Apple has never been successful here; whether that was because it ignored enterprise needs or simply because it couldn’t develop the necessary services in-house can be debated. This time, however, Apple brings in a partner with a lot of experience and a large portfolio of existing enterprise services (notorious, however, for their consistently bad user experience). Could an exclusive combination of a shiny new mobile UI with a proven third-party backend finally change the market situation in Apple’s favor? Personally, I’m somewhat skeptical: although a better user experience does increase productivity and would be a welcome change for many enterprises, we’re still far away from a mobile-only world, and UI consistency across mobile and desktop platforms matters more than a shiny design. In any case, what matters most for Apple is the chance to sell more devices.

For IBM, the deal looks even less transparent. Granted, we do not know the financial details, but judging by how vehemently the announcement stated that IBM is “not just a channel partner for Apple”, many analysts suspect that reselling Apple devices could be a substantial part of IBM’s profit from the partnership. Another important point is, of course, that IBM cannot afford to maintain a truly exclusive iOS-only platform. Sure, iOS is still a dominant platform on the market, but its share is far from 100%. In fact, it is already decreasing and will probably continue to decrease as other platforms gain market share. Android has been growing steadily over the last year, and it’s definitely too early to dismiss Windows Phone (remember how people tried to dismiss the Xbox years ago?). So, IBM must continue to support all other platforms with products such as MaaS360 and can only rely on additional services to support the notion of iOS exclusivity. In any case, the partnership will definitely bring new revenue from consulting, support and cloud services; however, it’s hard to say how much Apple will actually contribute to that.

So, what about the competitors? One thing that at least several publications seem to ignore is that the companies supposedly threatened by the new partnership operate in several completely different markets, and comparing them to each other is like comparing apples to oranges.

For example, Apple does not need IBM’s assistance to trump BlackBerry as a rival mobile device vendor. But applying the same logic to Microsoft’s Windows Phone platform would be a big mistake. Sure, its current share of the mobile hardware market is quite small (though not in every market: in Germany it is over 10% and growing), but to claim that Apple/IBM will drive Microsoft out of the enterprise service business is simply ridiculous. In fact, Microsoft is a dominant player there, with products like Office 365 and Azure Active Directory, and it isn’t going anywhere anytime soon.

Apparently, SAP CEO Bill McDermott isn’t too worried about the deal either. SAP already offers 300 enterprise apps for the iOS platform and claims to be years ahead of its competitors in analytics software.

As for Google – well, they do not make money from selling mobile devices. Everything Google does is designed to lure more users into its online ecosystem, and although Android is an important part of that strategy, it’s by no means the only one. Google services are just as readily available on Apple devices, after all.

Anyway, the most important question we should ask isn’t about Apple’s or IBM’s strategies, but about our own. Does the new IBM/Apple partnership have enough impact to make an organization reconsider its current MDM, BYOD or security strategy? The answer is obviously “no”. BYOD is by definition heterogeneous, and any solution an organization deploys for managing mobile devices (and, more importantly, access to corporate information from those devices) that is locked to a single platform is simply not viable. Good design may be good business, but it is not the most important factor when the business is primarily about enterprise information management.


© 2014 Alexei Balaganski, KuppingerCole