GlobalSign acquires Ubisecure, plans to win the IoE market

30.09.2014 by Alexei Balaganski

GlobalSign, one of the world’s biggest certificate authorities and a leading provider of digital identity services, has announced today that it has acquired Ubisecure, a Finnish privately held software development company specializing in Identity and Access Management solutions.

Last year, KuppingerCole recognized Ubisecure as a Product Leader in our Leadership Compass on Access Management and Federation. Support for a broad range of authentication methods, including national ID cards and banking cards, as well as integrated identity management capabilities with configurable registration workflows, were noted among the product's strengths. However, it is the solution's focus on enabling identity services at large scale, targeted at governments and service providers, that KuppingerCole considers Ubisecure's primary strength.

Unfortunately, until recently the Helsinki-based company has only been present in EMEA (mainly in the Nordic countries), evidently lacking the resources to maintain a strong partner network. GlobalSign's large worldwide presence, with 9 international offices and over 5,000 reseller partners, provides a unique opportunity to bring Ubisecure's technology to the global market quickly and with little effort.

GlobalSign, established in 1996, is one of the oldest and biggest, as well as reportedly the fastest growing, certificate authorities on the market. After becoming part of the Japanese GMO Internet Inc. group of companies in 2006, GlobalSign has been steadily expanding its enterprise presence with services like enterprise PKI, a cloud-based managed SSL platform, and strategic collaborations with cloud service providers. With the acquisition of Ubisecure, the company is launching its new long-term strategy of becoming a leading provider of end-to-end identity services for smart connected devices, powering the so-called Internet of Everything.

Market analysts currently estimate that up to 50 billion such devices (or simply "things") will be connecting to the Internet within the next 10 years. This may well be the largest technology market in history, with over $14 trillion at stake. Needless to say, the new trend brings new critical challenges that have to be addressed, such as device security and malware protection. Probably the biggest of them all, however, is going to be providing identity services on a massive scale: mediating trust for billions of online transactions between people and "things" every minute and ensuring the safety of e-commerce, communications, and content delivery.

A company that manages to bring a service with such capabilities to the market first will definitely be in a very attractive position, and GlobalSign, with its strong background in identity-related solutions, massive existing customer base and large partner network, is aspiring to grab that position by making Ubisecure's innovative technology available globally. Time will tell how well it can compete against the technological giants on the market, as well as against other API vendors with a strong IAM background (Ping Identity and CA / Layer 7 come to mind). Still, recognizing a rare combination of innovative technology and solid market presence, we believe GlobalSign is definitely a player worth watching.


Posted in Security | No comments

First Heartbleed, now Shellshock?

29.09.2014 by Alexei Balaganski

Half a year has passed since the discovery of the dreaded Heartbleed bug, and the shock of that incident, which many have dubbed the most serious security flaw in years, has finally begun to wear off. Then, last week, the security community was shocked again, when details of a new critical vulnerability in another widely used piece of software were made public after the initial embargo.

Apparently, Bash, arguably the most popular Unix shell used on hundreds of millions of servers, personal computers, and network devices, contains a critical bug in the way it processes environment variables, which causes unintended execution of system commands stored in those variables (you can find a lot of articles explaining the details, ranging from pretty simple to deeply technical). Needless to say, this provides an ample opportunity for hackers to run malicious commands on affected machines, whether they are connected to a network or not. What's worse, the bug has remained unnoticed for over twenty years, which means that huge numbers of legacy systems are affected as well (as opposed to Heartbleed, which was caused by a bug in a recent version of OpenSSL).

Given the huge number of affected devices, many security researchers have already called Shellshock "bigger than Heartbleed". In my opinion, however, comparing these two problems directly isn't that simple. The biggest problem with the Heartbleed bug was that it affected even those companies that had been consistently following security best practices, simply because the most important security tool itself was flawed. Ironically, those who failed to patch their systems regularly and were still using an old OpenSSL version were not affected.

The Shellshock bug, however, is different: Bash itself, being simply a command-line tool for system administrators, is usually not directly exposed to the Internet, and the vulnerability can only be exploited through other services. In fact, if your IT staff has been following reasonably basic security guidelines, the impact on your network will already be minimal, and with a few additional steps it can be prevented completely.

The major attack vector for this vulnerability is, naturally, CGI scripts. Although CGI is a long outdated technology, which, quite frankly, has no place on a modern web server, it's still found on a lot of public web servers. For example, the popular Apache web server has a CGI module enabled by default, which means that hackers can use the Shellshock bug as a new means to deploy botnet clients on web servers, steal system passwords and so on. There have already been numerous reports about attacks exploiting the Shellshock bug in the wild. Researchers also report that weaknesses in DHCP clients or SSH servers can potentially be exploited as well, however this requires special conditions to be met and can be easily prevented by administrators.
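
To make the CGI vector more concrete: the web server copies request headers into environment variables (such as HTTP_USER_AGENT) before invoking the script, and if that script is executed by a vulnerable Bash, whatever follows the function definition in the header gets run. Here is a minimal sketch of such a probe, to be tried only against a test server you own; the host name and the /cgi-bin/status.sh path are hypothetical:

    # Harmless probe: if the Bash behind the CGI script is vulnerable, the HTTP
    # response will contain the output of /usr/bin/id instead of the normal page.
    curl -H "User-Agent: () { :; }; echo Content-Type: text/plain; echo; /usr/bin/id" \
         http://test-server.example.com/cgi-bin/status.sh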

So, what are our recommendations for dealing with the Shellshock bug?

For consumers:

First of all, you should check whether your computers or network devices are affected by the bug at all. Computers running various Unix flavors are vulnerable, most importantly many Linux distributions and OS X. Obviously, Windows machines are not affected unless they have Cygwin software installed. Most embedded network devices, such as modems and routers, although Linux-based, use a different shell, BusyBox, which doesn't have the bug. As for mobile devices, stock iOS and Android do not contain the Bash shell, but jailbroken iOS devices and custom Android firmwares may have it installed.

A simple test for checking whether your shell is vulnerable is this command:

env X="() { :;}; echo vulnerable" bash -c "echo hello"

If you see "vulnerable" after running it, you know your system is affected and you should immediately look for a security update. Many vendors have already issued patches for their OS distributions (although Apple is still working on an official patch, there are instructions available for fixing the problem DIY-style).
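
On most Linux systems, updating is a one-liner through the package manager; the exact commands depend on your distribution, so treat the following as a sketch:

    # Debian/Ubuntu-style systems:
    sudo apt-get update && sudo apt-get install --only-upgrade bash

    # Red Hat / CentOS / Fedora-style systems:
    sudo yum update bash

After updating, re-run the test command above; it should print only "hello".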

For network administrators:

Obviously, you should install security updates as well, but stopping there would not be a good idea. Although a series of patches for the currently described Bash vulnerabilities has already been issued, researchers warn that Bash was never designed with security in mind and that new vulnerabilities may be discovered in it later. A reasonable, if somewhat drastic, option would be to replace Bash on your servers with a different shell, since just about every other shell does not interpret commands embedded in environment variables and is therefore inherently immune to this exploit.
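
A quick sketch of what that could look like on a Debian-style system ("appuser" is a placeholder account name; check that the replacement shell is listed in /etc/shells first):

    # See which shell /bin/sh actually points to; on Debian and Ubuntu it is
    # usually dash, which does not interpret function definitions in
    # environment variables.
    ls -l /bin/sh

    # Switch an individual account's login shell away from Bash:
    sudo chsh -s /bin/dash appuser

Keep in mind that scripts with an explicit #!/bin/bash shebang will still invoke Bash, so this is a complement to patching, not a substitute.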

Another important measure would be to check all network services that can interact with Bash and harden their configurations appropriately. This includes, for example, the ForceCommand feature in OpenSSH.

Last but not least, you should make sure that your network security tools are updated to recognize the new attack. Security vendors are already working on adding new tests to their software.

For web application developers:

Do not use CGI. Period.

If you are stuck with a legacy application you still have to maintain, you should at least put it behind some kind of "sanitizing proxy" service that filters out requests carrying the malicious payload before it ever reaches an environment variable. Many vendors offer specialized solutions for web application security, however, budget solutions using open source tools like nginx are possible as well.
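
As a rough sketch of the budget approach, an nginx front end can simply refuse requests whose headers contain the characteristic "() {" pattern before proxying them to the legacy application. The upstream address is a placeholder, and only a few headers are inspected here; a production setup would cover more of them:

    server {
        listen 80;

        set $shellshock 0;
        if ($http_user_agent ~ "\(\)\s*\{") { set $shellshock 1; }
        if ($http_referer    ~ "\(\)\s*\{") { set $shellshock 1; }
        if ($http_cookie     ~ "\(\)\s*\{") { set $shellshock 1; }
        if ($shellshock) { return 403; }

        location / {
            proxy_pass http://127.0.0.1:8080;   # the legacy CGI application
        }
    }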

So, if the Shellshock bug can be fixed so easily, why are security researchers so worried about it? The main reason is the sheer number of legacy devices that will never be patched and will remain exposed to the exploit for years. Another burning question for IT departments is: how long have hackers (or worse, the NSA) been aware of the bug, and for how long could they have been secretly exploiting it? Remember, the upper limit for this guess is 22 years!

And of course, in an even longer perspective, the problem raises a lot of new questions regarding the latest IT fad: the Internet of Things. Now that we already have smart fridges and smart cars and will soon have smart locks and smart thermostats installed everywhere, how can we make sure that all these devices remain secure in the long term? Vendors predict that in 10 years there will be over 50 billion "things" connected to a global network. Can you imagine patching 50 billion Bash installations? Can you afford not to patch your door lock? Will you be able to install an antivirus on your car? Looks like we need to have a serious talk with IoT vendors. How about next year at our European Identity and Cloud Conference?


Posted in Security | No comments

Real-time Security Intelligence: history, challenges, trends

02.09.2014 by Alexei Balaganski

Information security is just as old as Information Technology itself. As soon as organizations began to depend on IT systems to run their business processes and to store and process business information, it became necessary to protect these systems from malicious attacks. The first concepts of tools for detecting and fighting off intrusions into computer networks were developed in the early 1980s, and in the following three decades security analytics has evolved through several different approaches, reflecting the evolution of the IT landscape as well as changing business requirements.

First-generation security tools – firewalls and intrusion detection and prevention systems (IDS/IPS) – have essentially been solutions for perimeter protection. Firewalls were traditionally deployed on the edge of a trusted internal network and were meant to prevent attacks from the outside world. The first firewalls were simple packet filters, effective at blocking known types of malicious traffic or protecting against known weaknesses in network services. Later generations of application firewalls can understand certain application layer protocols and thus provide additional protection for specific applications: mitigate cross-site scripting attacks on websites, protect databases from SQL injections, perform DLP functions, etc. Intrusion detection systems can be deployed within networks, but old signature-based systems were only capable of reliably detecting known threats, while later statistical anomaly-based solutions were known to generate an overwhelming number of false alerts. In general, tuning an IDS for a specific network has always been a difficult and time-consuming process.

These traditional tools are still widely deployed by many organizations and in certain scenarios serve as a useful part of enterprise security infrastructures, but recent trends in the IT industry have largely made them obsolete. The continued deperimeterization of corporate networks caused by the adoption of cloud and mobile services, as well as the emergence of many new legitimate communication channels with external partners, has made the task of protecting sensitive corporate information more and more difficult. The focus of information security has gradually shifted from perimeter protection towards detection of and defense against threats within corporate networks.

The so-called Advanced Persistent Threats usually involve multiple attack vectors and consist of several covert stages. These attacks may go on undetected for months and cause significant damage to unsuspecting organizations. Often they are first uncovered by external parties, adding reputation damage to financial losses. A well-planned APT may exploit several different vulnerabilities within the organization: an unprotected gateway, a bug in an outdated application, a zero-day attack exploiting a previously unknown vulnerability, and even social engineering, targeting the human factor so often neglected by IT security.

By the mid-2000s, it was obvious that efficient detection of and defense against these attacks requires a completely new approach to network security. The need to analyze and correlate security incidents from multiple sources, to manage a large number of alerts and to be able to perform forensic analysis has led to the development of a new organizational concept: the Security Operations Center (SOC). A SOC is a single location where a team of experts monitors security-related events across the entire enterprise IT environment and takes action against detected threats. Many large enterprises have established their own SOCs; for smaller organizations that cannot afford the considerable investment and a skilled security staff of their own, such services are usually offered as a managed security service.

The underlying technological platform of a security operations center is SIEM: Security Information and Event Management – a set of software and services for gathering, analyzing and presenting information from various sources, such as network devices, applications, logging systems, or external intelligence sources. The term was coined in 2005 and the concept was quickly adopted by the market: currently there are over 60 vendors offering SIEM solutions in various forms. There was a lot of initial hype around the SIEM concept, as it was offered as a turnkey solution for all the security-related problems mentioned above. The reality, however, has shown that, although SIEM solutions are very capable sets of tools for data aggregation, retention and correlation, as well as for monitoring, alerting and reporting of security incidents, they are still just tools, requiring a team of experts to deploy and customize them, and another team to run them on a daily basis.

Although SIEM solutions are currently widely adopted by most large enterprises, there are several major challenges that, according to many information security officers, prevent organizations from using them efficiently:

  • Current SIEM solutions require specially trained security operations experts to operate; many organizations simply do not have enough resources to maintain such teams.
  • Current SIEM solutions generate too many false positive alerts, forcing security teams to deal with overwhelming amounts of unnecessary information. Obviously, current correlation and anomaly detection algorithms are not efficient enough.
  • The degree of integration offered by current SIEM solutions is still insufficient to provide a truly single management console for all kinds of operations. Responding to a security incident may still require performing too many separate actions using different tools.

Another common shortcoming of current SIEM solutions is a lack of flexibility when dealing with unstructured data. Since many of the products are based on relational databases, they force rigid schemas onto collected information and do not scale well when dealing with large amounts of data. This obviously prevents them from efficiently detecting threats in real time.

Over the last couple of years, these challenges have led to the emergence of the “next-generation SIEM” or rather a completely new technology called Real-time Security Intelligence (RTSI). Although the market is still in its early stage, it is already possible to summarize the key differentiators of RTSI offerings from previous-generation SIEM tools:

  • Real-time or near real-time detection of threats that enables quick remediation before damage is done;
  • Possibility to correlate real-time and historical data from various sources, as well as apply intelligence from external security information services, thus detecting malicious operations as whole events, not separate alerts;
  • A small number of clearly actionable alarms, achieved by reducing the false positive rate and by introducing different risk levels for incidents;
  • Automated workflows for responding to detected threats, such as disrupting clearly identified malware attacks or submitting a suspicious event to a managed security service for further analysis.

The biggest technological breakthrough that made these solutions possible is Big Data analytics. The industry has finally reached the point where business intelligence algorithms for large-scale data processing, previously affordable only to large corporations, have become commoditized. Utilizing readily available frameworks such as Apache Hadoop and inexpensive hardware, vendors are now able to build solutions for collecting, storing and analyzing huge amounts of unstructured data in real time.

This makes it possible to combine real-time and historical analysis and identify new incidents as being related to others that occurred in the past. Combined with external security intelligence sources that provide current information about the newest vulnerabilities, this can greatly facilitate identification of ongoing APT attacks on the network. Having a large amount of historical data at hand also significantly simplifies initial calibration to the normal patterns of activity of a given network, which are then used to identify anomalies. Existing RTSI solutions are already capable of automated calibration with very little input required from administrators.

Alerting and reporting capabilities of RTSI solutions are also significantly improved. Big Data analytics technology can generate a small number of concise and clearly categorized alerts that allow even an inexperienced person to make a relevant decision, yet it provides a forensic expert with much more detail about the incident and its relations to other historical anomalies.

As mentioned above, the RTSI market is still in its early stage. There are many new offerings with various scopes of functionality from both established IT security vendors and startups, available today or planned for release in the near future. It is still difficult to predict in which direction the market will evolve and which features should be expected from an innovation leader. However, it is already clear that only the vendors that offer complete solutions, and not just sets of tools, will win the market. It is important to understand that Real-time Security Intelligence is more than just SIEM 2.0.

This article was originally published in the KuppingerCole Analysts’ View Newsletter. Also check out video statements of my colleagues Mike Small and Rob Newby on this topic.


Posted in Security | No comments

Did someone just steal my password?

07.08.2014 by Alexei Balaganski

Large-scale security breaches are nothing new. Last December we heard about the hack of the American retail chain Target's network, in which over 40 million credit card records and 70 million addresses were stolen. This May, eBay announced that hackers got away with records on more than 145 million of their customers. And the trend doesn't stop: despite all the efforts of security researchers and government institutions, data breaches occur more frequently and get bigger and more costly. The average total cost of a data breach for a company is currently estimated at $3.5 million. The public has already heard about these breaches so often that it has become a bit desensitized to them. However, the latest announcement from the American company Hold Security should definitely make even the laziest people sit up and take notice.

Apparently, a gang of cybercriminals from Russia, which the company dubbed CyberVor ("cyber thief" in Russian), has managed to amass the largest known collection of stolen credentials: over 1.2 billion passwords and more than 500 million email addresses! The company hasn't revealed a lot of details, but these were not, of course, the spoils of a single breach – the gang has allegedly compromised over 420 thousand websites over the course of several years. Still, the numbers are overwhelming: the whole collection contains over 4.5 billion records. Surely, I could be somewhere in that huge list, too? What can I do to prevent hackers from stealing my precious passwords? Can someone help me with that?

In a sense, we still live in the era of the Internet Wild West. No matter how often passwords are proclaimed dead and how hard security vendors are trying to sell their alternative, more secure authentication solutions, no matter how long government commissions discuss stricter regulations and larger fines for data breaches – way too many companies around the world still store their customers' credentials in clear text, and way too many users still use the same password "password" for all their accounts. Maybe in twenty years or so we will remember these good old days of "Internet Freedom" with romantic nostalgia, but for now we have to face the harsh reality of a world where nobody is going to protect our personal information for us.

This, by the way, reminds me of another phenomenon of the Wild West era: snake oil peddlers. Unfortunately, quite a few security companies now attempt to capitalize on the data breach fear in a similar way. Instead of providing customers with the means to protect their credentials, they instead offer services like "pay to see whether your account has been stolen". And these services aren't cheap.

Surely, these companies need to earn money just like everyone else, but charging people for such useless information is dubious at best. I'm not even going to mention the fact that there might even be services out there that are essentially good old phishing sites, which would collect your credentials and use them for malicious purposes.

As the famous Russian novel "The Twelve Chairs" states, mocking a common propaganda slogan of the early Soviet period: "Assistance to drowning persons is in the hands of those persons themselves." I published a blog post some time ago outlining a list of simple rules one should follow to protect oneself from the consequences of a data breach: create long and complex passwords, do not reuse the same password for several sites, invest in a good secure password manager, look for sites that support two-factor authentication and so on. Of course, this won't prevent future breaches from happening (apparently, nothing can), but it will help minimize the consequences: in the worst case, only one of your accounts will be compromised, not all of them.

Whenever you hear that a website you're using has been hacked, you no longer have to wonder whether your credentials have been stolen or not: you simply assume the worst, spend a minute changing your password, and rest assured that the hackers have no use for your old credentials anymore. This way, you're not only avoiding exposure to "CyberVors", but also not letting "CyberZhuliks" (cyber fraudsters) make money by selling you their useless services.


Posted in Security | 1 comment

Operation Emmental: another nail in the coffin of SMS-based two-factor authentication

23.07.2014 by Alexei Balaganski

On Tuesday, security company Trend Micro unveiled a long and detailed report on "Operation Emmental", an ongoing attack on online banking sites in several countries around the world. The attack is able to bypass the popular mTAN two-factor authentication scheme, which uses SMS messages to deliver transaction authorization numbers. There are very few details revealed about the scale of the operation, but apparently the attack was first detected in February and has affected over 30 banking institutions in Germany, Austria and Switzerland, as well as Sweden and Japan. The hackers supposedly got away with millions stolen from both consumer and commercial bank accounts.

Now, this is definitely not the first time hackers have defeated SMS-based two-factor authentication. Trojans designed to steal mTAN codes directly from mobile phones first appeared in 2010. Contrary to popular belief, these Trojans do not target only Android phones: in fact, the most widespread one, ZeuS-in-the-Mobile, has been discovered on various mobile platforms, including Android, Symbian, BlackBerry and Windows Mobile. In 2012, an attack campaign dubbed "Eurograbber" successfully stole over 36 million euros from banks in Italy, Spain and the Netherlands. Numerous smaller-scale attacks have been uncovered by security researchers as well. So, what exactly is new and different about the Emmental attack?

First it’s necessary to explain in a few words how a typical attack like Eurograbber actually works.

  1. Using traditional methods like phishing emails or compromised web sites, hackers lure a user to click a link and download a Windows-based Trojan onto their computer. This Trojan will run in the background and wait for the user to visit their online banking site.
  2. As soon as the Trojan detects a known banking site, it will inject its own code into the web page. This code can, for example, display a “security advice” instructing the customer to enter their mobile phone number.
  3. As soon as the hackers have a phone number, an SMS message with a link to a mobile Trojan is sent to it and the customer is instructed to install the malicious SMS-grabbing app on their phone.
  4. With both the customer's online banking PIN and the SMS TAN, hackers can easily initiate a fraudulent transaction, transferring money out of the customer's account.

It’s quite obvious that such a scheme can only work when both PC and mobile Trojans operate in parallel, coordinating their actions through a C&C server run by hackers. This means that it can also be relatively easily disrupted simply by using an antivirus, which would detect and disable the Trojan. Another method is deploying special software on the banking site, which detects and prevents web page injections.

The hackers behind the Emmental attack use a different approach. Instead of delivering a Trojan to a customer's computer, they use a small agent that masquerades as a Windows updater. On startup, this program changes the local DNS settings, replacing the IP addresses of known online banking sites with the address of a server controlled by the hackers. Additionally, it installs a new root SSL certificate, which makes browsers consider this rogue server a trusted one. After that, the program deletes itself, leaving no traces of malware on the computer.

The rest of the attack is similar to the one described above, but with a twist: the user never connects to the real banking site again; all communications take place with the fraudulent server. This deception can continue for a long time, and only after receiving a monthly statement from the bank would the user find out that their account has been cleared of all money.

In other words, while Emmental is not the first attack on mTAN infrastructure, it’s an important milestone demonstrating that hackers are actively working on new methods of defeating it, and that existing solutions that are supposed to make banks more resilient against this type of attack are much less effective than believed. SMS-based two-factor authentication has been compromised and should no longer be considered a strong authentication method. The market already offers a broad range of solutions from smartcards and OTP tokens to Mobile ID and smartphone apps. It’s really time to move on.


Posted in Mobile | No comments

What’s the deal with the IBM/Apple deal?

18.07.2014 by Alexei Balaganski

So, unless you've been hiding under a rock this week, you've definitely heard about the historic global partnership deal forged between IBM and Apple this Tuesday. The whole Internet has been abuzz for the last few days, discussing what long-term benefits the partnership will bring to both parties, as well as guessing which competitors will suffer the most from it.

Different publications have named Microsoft, Google, Oracle, SAP, Salesforce and even BlackBerry as the companies the deal was primarily targeted against. Well, at least for BlackBerry this could indeed be one of the last nails in the coffin, as its shares plummeted after the announcement and the trend seems to be long-term. IBM's and Apple's shares rose, unsurprisingly; however, financial analysts don't seem to be too impressed (in fact, some recommend selling IBM stock). This is, however, not the point of my post.

Apple and IBM have a history of bitter rivalry. 30 years ago, when Apple unveiled its legendary "1984" Big Brother commercial, it was a tiny contender against IBM's domination of the PC market. How times have changed! Apple has since grown into the largest player in the mobile device market, with a market capitalization several times larger than IBM's. IBM sold its PC hardware business to Lenovo years ago and currently concentrates on enterprise software, cloud infrastructure, and big data analytics and consulting businesses. So they are no longer competitors, but can we really consider them equal partners? Apple's cash reserves continue to grow, while IBM's revenues have been declining over the last two years. After losing a $600M contract with the US government to AWS last year, a partnership with Apple is a welcome change for them.

So, what’s in this deal, anyway? In short, it includes the following:

  • IBM introduces its enterprise software and services platform, as well as a set of over 100 industry-specific enterprise apps specifically designed and optimized for Apple devices. The newly announced MobileFirst platform is technically based on IBM’s existing MaaS360 mobile device management platform and BlueMix, a service development platform for IBM cloud, but newly packaged and optimized specifically for iOS 8.
  • With IBM's help, Apple's existing customer support service AppleCare will be extended to enterprise customers. IBM will take over the support services for their Apple-tailored enterprise solutions, as well as provide on-site support for iOS and Mac devices.
  • IBM will become Apple’s first worldwide distributor and reseller, offering procurement and reselling of devices, security and activation, and other managed services.

For Apple, this deal marks a renewed attempt to get a better hold of the enterprise market. It's well known that Apple has never been successful in this, and whether that was because of ignoring enterprise needs or simply an inability to develop the necessary services in-house can be debated. This time, however, Apple is bringing in a partner with a lot of experience and a large portfolio of existing enterprise services (notorious, however, for their consistently bad user experience). Could an exclusive combination of a shiny new mobile UI with a proven third-party backend finally change the market situation in Apple's favor? Personally, I'm somewhat skeptical: although a better user experience does increase productivity and would be a welcome change for many enterprises, we're still far away from a mobile-only world, and UI consistency across mobile and desktop platforms is a more important factor than a shiny design. In any case, the thing that matters most for Apple is the possibility to sell more devices.

For IBM, the deal looks even less transparent. Granted, we do not know the financial details, but judging by how vehemently their announcement stated that they are "not just a channel partner for Apple", many analysts do suspect that reselling Apple devices could be a substantial part of IBM's profit from the partnership. Another important point is, of course, that IBM cannot afford to maintain a truly exclusive iOS-only platform. Sure, iOS is still a dominant platform on the market, but its share is far from 100%. Actually, it is already decreasing and will probably continue to decrease in the future as other platforms gain market share. Android has been growing steadily during the last year, and it's definitely too early to dismiss Windows Phone (remember how people were trying to dismiss Xbox years ago?). So, IBM must continue to support all other platforms with products such as MaaS360 and can only rely on additional services to support the notion of iOS exclusivity. In any case, the partnership will definitely bring new revenue from consulting, support and cloud services; however, it's not easy to say how much Apple will actually contribute to that.

So, what about the competitors? One thing that at least several publications seem to ignore is that the companies supposedly threatened by the new partnership operate in several completely different markets, and comparing them to each other is like comparing apples to oranges.

For example, Apple does not need IBM's assistance to trump BlackBerry as a rival mobile device vendor. But applying the same logic to Microsoft's Windows Phone platform would be a big mistake. Sure, its current share of the mobile hardware market is quite small (not in every market, by the way: in Germany it is over 10% and growing), but to claim that Apple/IBM will drive Microsoft out of the enterprise service business is simply ridiculous. In fact, Microsoft is a dominant player there with products like Office 365 and Azure Active Directory, and it's not going anywhere yet.

Apparently, SAP CEO Bill McDermott isn't too worried about the deal either. SAP is already offering 300 enterprise apps for the iOS platform and claims to be years ahead of its competitors in the area of analytics software.

As for Google – well, they do not make money from selling mobile devices. Everything Google does is designed to lure more users into their online ecosystem, and although Android is an important part of their strategy, it’s by no means the only one. Google services are just as readily available on Apple devices, after all.

Anyway, the most important question we should ask isn't about Apple's or IBM's strategy, but about our own. Does the new IBM/Apple partnership have enough impact to make an organization reconsider its current MDM, BYOD or security strategy? The answer is obviously "no". BYOD is by definition heterogeneous, and any solution deployed by an organization for managing mobile devices (and more importantly, access to corporate information from those devices) that is locked to a single platform is simply not a viable option. Good design may be good business, but it is not the most important factor when the business is primarily about enterprise information management.


Posted in Mobile | No comments

Amazon Web Services: One cloud to rule them all

14.07.2014 by Alexei Balaganski

Since launching its Web Services in 2006, Amazon has been steadily pushing towards global market leadership by continuously expanding the scope of its services, increasing scalability and maintaining low prices. Last week, Amazon made another big announcement, introducing two major new services with funny names but a heavy impact on future competition in the mobile cloud services market.

Amazon Zocalo (Spanish for "plinth", "pedestal") is a "fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity". In other words, it is one of the few user-facing AWS services and nothing less than a direct competitor to Box, Google Drive for Work and other products for enterprise document storage, sharing, and collaboration. Built on top of the AWS S3 storage infrastructure, Zocalo provides a cross-platform solution (for laptops, iPads and Android tablets, including Amazon's own Kindle Fire) for storing and accessing documents from anywhere, synchronizing files between devices, and sharing documents for review and feedback. Zocalo's infrastructure provides at-rest and in-transit data encryption, centralized user management with Active Directory integration and, of course, ten AWS geo-regions to choose from in order to comply with local regulations.

Now, this does look like "another Box" at first sight, but with the ability to offer cloud resources cheaper than any other vendor, Amazon has every chance of quickly gaining a leading position in the market, even with Zocalo's limited feature set. With Google announcing unlimited storage for its enterprise customers and now Amazon driving prices further down, cloud storage itself has very little market value left. Just being "another Box" is simply no longer sustainable, and only the biggest players and those who can offer additional services on top of their storage infrastructure will survive in the long run.

Amazon Cognito (Italian for "known") is a "simple user identity and data synchronization service that helps you securely manage and synchronize app data for your users across their mobile devices." Cognito is part of the newly announced suite of AWS mobile services for mobile application developers, so it may not have caused a splash in the press like Zocalo, but it's still worth mentioning here because of its potentially big impact on future mobile apps. First of all, by outsourcing identity management and profile synchronization between devices to Amazon, developers can free up resources to concentrate on the business functionality of their apps and thus bring them to market faster. Second, when using the Cognito platform, app developers always work with temporary, limited identities, safeguarding their AWS credentials as well as enabling uniform access control across different login providers. Thus, developers are implicitly led towards implementing security best practices in their applications.

Currently, Cognito supports several public identity providers, namely Amazon, Facebook and Google, however the underlying federation mechanism is standards-based (OAuth, OpenID Connect), so I cannot believe it won't soon be extended to support enterprise identity providers as well.
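
To give an idea of what the basic flow looks like, here is a rough sketch using the AWS CLI; the account ID, identity pool ID, role ARN and provider tokens are all placeholders, and a mobile app would typically use the AWS SDK rather than the CLI:

    # 1. Exchange a provider login (here: Facebook) for a Cognito identity ID.
    aws cognito-identity get-id \
        --account-id 123456789012 \
        --identity-pool-id "us-east-1:11111111-2222-3333-4444-555555555555" \
        --logins graph.facebook.com="$FACEBOOK_ACCESS_TOKEN"

    # 2. Obtain an OpenID Connect token for that identity...
    aws cognito-identity get-open-id-token \
        --identity-id "us-east-1:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" \
        --logins graph.facebook.com="$FACEBOOK_ACCESS_TOKEN"

    # 3. ...and trade it for temporary, limited AWS credentials scoped by an IAM role.
    aws sts assume-role-with-web-identity \
        --role-arn arn:aws:iam::123456789012:role/CognitoAppRole \
        --role-session-name demo-session \
        --web-identity-token "$OPENID_TOKEN"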

Still, as much as the ex-developer in me feels excited about Cognito's capabilities, the analyst in me cannot help thinking that Amazon could have gone a step further. Currently, each app vendor maintains their own identity pool for their users. But why not give users control over their identities? Had Amazon made this additional step, it could eventually become the world's largest Life Management Platform vendor! How's that for an idea for Cognito 2.0?


Posted in Cloud | No comments

Is the latest attack on energy companies the next Stuxnet?

10.07.2014 by Alexei Balaganski

It really didn't take long after my last blog post on SCADA security for an exciting new development to appear in the press. Several security vendors, including Symantec and F-Secure, have revealed new information about a hacker group called "Dragonfly" (or alternatively "Energetic Bear") that has launched a massive cyber-espionage campaign against US and European companies, mainly from the energy sector. Allegedly, the most recent development indicates that the hackers have not just managed to compromise those companies for espionage, but also possess the capabilities needed for sabotage, disruption and damage to the energy grids of several countries.

Previous reports show that the group known as "Energetic Bear" has been operating since at least 2012, with highly qualified specialists based somewhere in Eastern Europe. Some experts go as far as to claim that the group has direct ties to Moscow, operating under the control of the Russian secret services. So, it's quite natural that many publications have already labeled Dragonfly the next Stuxnet.

Now, as much as I love bold statements like this, I personally still find it difficult to believe. I admit that I have not seen all the evidence yet, so let's summarize what we do know already:

  • A hacker group "Energetic Bear" has been active on the cyber-espionage scene since at least 2011. Their previous targets, besides manufacturing and energy companies, include organizations as diverse as Asian universities, US healthcare providers, European IT organizations, etc.
  • The group often uses a remote access tool dubbed Havex, which appears to be their own custom development, but relies on other tools readily available on the black market as well.
  • Experts have analyzed multiple variations of the Havex tool and, judging by activity patterns, concluded that its developers are operating within the Eastern European time zone.
  • Until recently, the malware has been primarily distributed over “traditional” channels, such as spam mails and exploit kits.

However, the most recent development that has brought Dragonfly into the limelight is that the group has begun distributing the malware using the "watering hole" approach. Several ICS software vendor websites have been compromised, and the software installers available for download there have been infected with Havex. It's been reported that in one case, the compromised software was downloaded at least 250 times.

Since the sites belonged to notable vendors of programmable logic controllers used in managing wind turbines and other critical equipment, there could not be any other conclusion than "Russia is attacking our energy infrastructure", right? Or could there?

Quite frankly, I fail to see any resemblance between Stuxnet and Dragonfly at all.

Stuxnet was a highly targeted attack created specifically for one purpose: to destroy Iran's nuclear enrichment capability. It contained modules developed specifically for a particular type of SCADA hardware. It was so complex in its structure that experts are still not done analyzing it.

Dragonfly, on the other hand, is based on existing and widely used malware tools. It has been targeting a wide array of different organizations – current reports show that it has managed to compromise over 1,000 companies. Also, the researchers who discovered the operation could not find any traces of PLC-controlling payloads; the only purpose of the tool appears to be intelligence gathering. The claims of ties to the Russian secret services seem to be completely unsubstantiated as well.

So, does this all mean that there is no threat to our energy infrastructures after all? Of course it does not! If anything, the whole Dragonfly story has again demonstrated the abysmal state of information security in Industrial Control Systems around the world. Keep in mind, this time the cause of the attack wasn't even the weak security of an energy infrastructure. Protecting your website from hacking belongs to the basic "security hygiene" norms and does not require any specialized software; a traditional antivirus and firewall would do just fine. Unfortunately, even SCADA software vendors seem to share the relaxed approach towards security typical for the industry.

The fact that the Dragonfly case has been publicized so much is actually good news, even if not all publications are up to a good journalism standard. If this publicity leads to tighter regulations for ICS vendors and increases awareness of the risks among ICS end users, we all win in the end. Well, maybe except the hackers.


Posted in Security | No comments

Managing Users in Office 365

01.07.2014 by Alexei Balaganski

Office 365 is a popular cloud-based office productivity service built around the Microsoft Office platform. Initially released in 2011, it went through a major upgrade in 2013 and is currently offered with different plans for home, small business, midsize and enterprise customers. Internally, the Office 365 platform uses Microsoft Azure Active Directory for identity management and, with the exception of the home and small business plans, offers three identity models for different user management scenarios. The recommended approach is to always start with the simplest model and transition to a more complicated one (or back) at any time as requirements change. Let's have a quick look at these models.

Cloud Identity

This is the simplest identity model and also the only one available for home and small business users. It's typically used when an organization has no existing on-premise directory. In this case, users' details are stored in the cloud directory only and can be managed using the standard Office 365 admin portal, which supports individual account management as well as rudimentary batch processing using CSV files. Administrators can also reset user passwords or assign certain users (such as helpdesk staff) to perform this for other users. There is no way for a user to reset their own password.

Microsoft also provides a set of modules for Windows PowerShell to enable automation of common administration tasks. Another convenient option is using the Azure AD Graph API, which is a RESTful programming interface for developers to easily build applications integrating with Azure Active Directory.
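
As a rough illustration of the latter option, listing the users of a tenant comes down to a single authenticated REST call. The tenant name and token below are placeholders; the access token must first be obtained from Azure AD with directory read permissions, and the api-version value shown is one of those in use at the time of writing and may differ for your tenant:

    # TENANT and TOKEN are placeholders for your tenant domain and a valid
    # OAuth 2.0 access token issued for https://graph.windows.net.
    TENANT="contoso.onmicrosoft.com"
    TOKEN="<access token>"

    curl -s -H "Authorization: Bearer $TOKEN" \
      "https://graph.windows.net/$TENANT/users?api-version=2013-11-08"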

Synchronized Identity

The majority of organizations that already have an on-premise directory will definitely choose this model, which relies on several tools to synchronize existing user accounts with the cloud directory. Since Microsoft introduced password hash sync in 2013, this model can also provide a "single sign-on" user experience with the same password on-premises and in the cloud, without the complexity of identity federation.

Microsoft provides several different tools for directory synchronization, starting with the old and proven DirSync tool suitable for organizations that have a single Active Directory. In the simplest case DirSync can be installed directly on the domain controller and does not require any additional infrastructure.

The next-generation Azure AD Sync tool is designed to replace DirSync and adds many new functions, including support for other directories such as LDAP or SQL. More complex scenarios are possible using Microsoft Forefront Identity Manager and its various connectors (currently this is still the only solution for synchronizing with non-AD directories).

It should be noted that identity synchronization with Office 365 has several limitations. For example, synchronizing a single on-premise directory with several cloud tenants leads to multiple problems and is actively discouraged by Microsoft. Therefore, an organization with several Office 365 subscriptions should consider merging them into one using third-party tools before setting up synchronization.

Federated Identity

Since the Synchronized Identity model covers the vast majority of use cases with significantly less administration effort, organizations should really consider deploying a federation infrastructure only for certain complex scenarios, for example:

  • An ADFS infrastructure is already in place or there are existing third party identity providers;
  • Special technical requirements, like smartcard authentication or support for password reset via Office 365 portal;
  • Special policy requirements, like login auditing requirement or regulations prohibiting password synchronization.

Important note: one should not forget that federation still requires user accounts to be synchronized from the on-premise directory to the cloud, so one should never jump directly to the federated model without setting up synchronization first. Azure Active Directory currently supports multiple protocols for identity federation with Active Directory Federation Services 2.0 or other third-party Security Token Services. The most recent addition to this list is SAML 2.0, which was announced in March 2014.

Microsoft has established the "Works with Office 365 – Identity" program, which is a qualification for third-party identity providers for federation with Office 365. A list of qualified providers is maintained here.

Unfortunately, current versions of the Office desktop applications have a major incompatibility with many third-party identity providers, since they only support so-called active authentication, which can only be accomplished using the WS-Trust protocol. Until an update is released later in 2014, Microsoft officially supports federation only with AD FS 2.0 or with qualified third-party providers from the list above.

Conclusion

Microsoft has come a long way since the initial release of Office 365. The current generation of Azure Active Directory enables different identity models that support nearly all possible usage scenarios. While there are still several major interoperability issues the company has to solve, unless you have a really unusual on-premise identity environment, you should be covered by one of the options above.

This article was originally published in the KuppingerCole Analysts’ View Newsletter.


Posted in Cloud | No comments

Will 2014 be the year of SCADA Security awareness?

30.06.2014 by Alexei Balaganski

If you attended our European Identity and Cloud Conference this May, you have probably noticed that, as opposed to previous years, a significantly bigger part of the agenda and a substantial number of expo stands were devoted to practical, "down to earth" aspects of IT security. Multi-factor authentication, encryption technologies, source code analysis, even backup – many of these topics have previously been looked down upon by strategists as boring tasks for IT engineers.

Well, times have changed. The explosive growth of computing power and networks, the continued erosion of enterprise perimeters, the development of ever more complicated Advanced Persistent Threats – all these trends are bringing good old information security back to the front pages. Before the Snowden revelations, few gave serious thought to encrypted communications. Before Heartbleed, few people actually knew what "static code analysis" means.

There is, however, one topic that I personally consider extremely important, which has not received enough limelight in recent years. This, of course, is Industrial Control System security, more often referred to as SCADA security.

In layman’s terms, SCADA (supervisory control and data acquisition) is a system for monitoring and controlling industrial processes of different kinds. Over decades, SCADA systems have evolved into large-scale systems operating complexes of equipment over large distances. SCADA systems are widely utilized in manufacturing, oil and gas refining, power generation and distribution, water treatment, and also for controlling facilities like heating and air conditioning in buildings or ships. In other words, SCADA systems control a significant part of every nation’s critical infrastructure, which makes them an important target for that nation’s enemies.

Unfortunately, SCADA systems have historically never been designed with security in mind. Early systems were monolithic, physically isolated systems without any network connectivity. Later generations were based on proprietary LAN protocols that usually lacked any kind of transport security or authentication. Modern (or I should rather say "current") SCADA systems have evolved into large-scale decentralized systems with an increased number of network connections. They are gradually shifting from proprietary protocols to open standards and becoming increasingly interconnected with office networks and the Internet. Many workstations and human-machine interfaces (HMI) are actually standard Windows PCs, often running outdated and unpatched software. Programmable Logic Controllers (PLC), the actual components controlling physical processes, are even more vulnerable, since their software and network protocols historically lack any security. Until recently, both SCADA vendors and the enterprises deploying these systems gave little consideration to security issues, more or less relying on security by obscurity.

The discovery of the Stuxnet malware in 2010 shattered that false sense of safety. A piece of software on a USB drive, planted at an Iranian nuclear facility by US and Israeli intelligence services, was able to disrupt the PLCs that controlled nuclear material enrichment centrifuges and ultimately physically destroy them. This case was widely publicized four years ago and naturally led to the establishment of standards and guidelines for the prevention of such incidents in both the public and private sectors in many countries.

However, it somehow failed to grab the general public’s attention as much as Snowden and Heartbleed did later. Sure, the press regularly reports about new vulnerabilities found in different ICS systems, like this one (from last week!) or this. Check out my favorite quote:

The poor security of such software was revealed by a project Mr Rios and a colleague undertook in which they sought to find 100 Scada bugs in 100 days.

“We ended up finding over 1,000 bugs in 100 days,” he said. “Scada software security simply hasn’t kept up with modern times. The security of software like iTunes is much more robust than the software supporting our critical infrastructure.”

So, what makes SCADA systems so difficult to secure? Many reasons, actually, that require completely different approaches to address them.

  1. As I already mentioned above, current SCADA systems have evolved into distributed systems based on open network standards and commodity software, so they are theoretically vulnerable to the same attack vectors as other corporate networks. However, their design has historically never addressed security and identity issues at all.
  2. Although many components of SCADA systems run Windows, standard endpoint protection solutions are not particularly suitable for them, because even a minor latency spike caused by a malware database update may lead to a disruption of the manufacturing process. Addressing process continuity in anti-malware software requires substantial changes in its logic.
  3. Traditional detection and blocking techniques are obviously not applicable for specialized systems like PLCs. Development of specialized solutions for their protection requires tight collaboration with PLC vendors. The same is true for integration with existing control and monitoring modules of SCADA systems.
  4. The newest trend in SCADA development follows the current trends in IT in general: growing adoption of cloud services, introducing the "Internet of Things" approach to system design, etc. This leads to rapid growth in complexity, since the number of modules and connections between them increases exponentially. Of course, this enables even more new attack vectors.
  5. Growing political tensions, both between developed countries and with global terrorist organizations, mean that critical infrastructures are more likely to become targets of cyber-attacks with potentially catastrophic outcomes.

Security experts have been talking about doomsday scenarios as a consequence of a possible attack on a power grid or a chemical plant. Luckily, until now these predictions have never materialized, but the poor state of ICS security still makes them the proverbial sword of Damocles hanging over our heads. Statistics clearly show that the number and the level of sophistication of attacks on ICS are steadily increasing.

Yes, until now we have not experienced a successful cyber-attack on a critical infrastructure leading to an industrial disaster with human casualties. But can we be sure that it won't happen tomorrow? Not really. Luckily, both government organizations and security vendors are already working on different approaches to address this threat, in both the short-term and long-term perspectives. And I firmly believe that EIC could be a good meeting place for these specialists to discuss this topic. Maybe next year already?

If you're looking for more information on this topic, check out KuppingerCole's published research documents.


Posted in Security | No comments