02.09.2014 by Alexei Balaganski
Information security is just as old as Information Technology itself. As soon as organizations began to depend on IT systems to run their business processes and to store and process business information, it became necessary to protect these systems from malicious attacks. The first concepts of tools for detecting and fighting off intrusions into computer networks were developed in the early 1980s, and over the following three decades security analytics has evolved through several different approaches, reflecting the evolution of the IT landscape as well as changing business requirements.
First-generation security tools – firewalls and intrusion detection and prevention systems (IDS/IPS) – were essentially solutions for perimeter protection. Firewalls were traditionally deployed at the edge of a trusted internal network and were meant to prevent attacks from the outside world. The first firewalls were simple packet filters, effective for blocking known types of malicious traffic or protecting against known weaknesses in network services. Later generations of application firewalls understand certain application-layer protocols and can thus provide additional protection for specific applications: mitigating cross-site scripting attacks on websites, protecting databases from SQL injections, performing DLP functions, etc. Intrusion detection systems can be deployed within networks, but old signature-based systems could only reliably detect known threats, while later statistical anomaly-based solutions were notorious for generating an overwhelming number of false alerts. In general, tuning an IDS for a specific network has always been a difficult and time-consuming process.
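To make the packet-filter idea concrete: a first-generation firewall essentially walks an ordered rule list and applies the first matching rule to each packet. A toy Python sketch follows; the rule format and field names are invented for illustration, not taken from any real product:

```python
# Minimal sketch of a first-generation packet filter: each packet is
# matched against an ordered rule list; the first matching rule wins.
# Rule format and field names are purely illustrative.

def match(rule, packet):
    """A rule matches if every field it specifies equals the packet's value."""
    return all(packet.get(field) == value for field, value in rule["when"].items())

def filter_packet(rules, packet, default="DROP"):
    for rule in rules:
        if match(rule, packet):
            return rule["action"]
    return default  # default-deny is the safer policy

rules = [
    {"when": {"dst_port": 23}, "action": "DROP"},                    # block Telnet
    {"when": {"proto": "tcp", "dst_port": 80}, "action": "ACCEPT"},  # allow HTTP
]

print(filter_packet(rules, {"proto": "tcp", "dst_port": 80}))  # ACCEPT
print(filter_packet(rules, {"proto": "udp", "dst_port": 53}))  # DROP (no rule matched)
```

Note what this model cannot do: it inspects individual packets in isolation, which is exactly why it is limited to blocking known-bad traffic patterns and cannot understand application behavior.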
These traditional tools are still widely deployed and in certain scenarios remain a useful part of enterprise security infrastructures, but recent trends in the IT industry have largely made them obsolete. The continued deperimeterization of corporate networks, driven by the adoption of cloud and mobile services as well as the emergence of many new legitimate communication channels with external partners, has made the task of protecting sensitive corporate information more and more difficult. The focus of information security has gradually shifted from perimeter protection towards detection of and defense against threats within corporate networks.
The so-called Advanced Persistent Threats (APTs) usually involve multiple attack vectors and consist of several covert stages. These attacks may go undetected for months and cause significant damage to unsuspecting organizations. Often they are first uncovered by external parties, adding reputation damage to the financial losses. A well-planned APT may exploit several different vulnerabilities within the organization: an unprotected gateway, a bug in an outdated application, a zero-day exploit targeting a previously unknown vulnerability, and even social engineering, which targets the human factor often neglected by IT security.
By the mid-2000s, it was obvious that efficient detection of and defense against these attacks required a completely new approach to network security. The need to analyze and correlate security incidents from multiple sources, to manage a large number of alerts and to perform forensic analysis led to the development of a new organizational concept: the Security Operations Center (SOC). A SOC is a single location where a team of experts monitors the security-related events of the entire enterprise information system and takes action against detected threats. Many large enterprises have established their own SOCs; for smaller organizations that cannot afford the considerable investment and a skilled security staff of their own, such services are usually offered as a Managed Security Service.
The underlying technological platform of a security operations center is SIEM: Security Information and Event Management – a set of software and services for gathering, analyzing and presenting information from various sources, such as network devices, applications, logging systems, or external intelligence feeds. The term was coined in 2005, and the concept was quickly adopted by the market: there are currently over 60 vendors offering SIEM solutions in various forms. There was a lot of initial hype around the SIEM concept, which was pitched as a turnkey solution for all the security problems mentioned above. Reality, however, has shown that although SIEM solutions are very capable toolsets for data aggregation, retention and correlation, as well as for monitoring, alerting and reporting on security incidents, they are still just tools, requiring one team of experts to deploy and customize them and another to run them on a daily basis.
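To illustrate the kind of correlation such tools perform, here is a deliberately simplified Python sketch of a single correlation rule; the event format, threshold and time window are hypothetical assumptions, not taken from any real SIEM product:

```python
from collections import defaultdict

# Toy SIEM correlation rule: flag a source IP that produces several
# failed logins followed by a successful one within a short time window,
# a classic brute-force indicator. Thresholds are illustrative.

WINDOW = 300    # correlation window in seconds
THRESHOLD = 3   # failed attempts before a success becomes suspicious

def correlate(events):
    """events: list of (timestamp, source_ip, outcome) sorted by timestamp."""
    failures = defaultdict(list)
    alerts = []
    for ts, ip, outcome in events:
        if outcome == "fail":
            failures[ip].append(ts)
        elif outcome == "success":
            recent = [t for t in failures[ip] if ts - t <= WINDOW]
            if len(recent) >= THRESHOLD:
                alerts.append((ip, ts, len(recent)))
            failures[ip].clear()
    return alerts

events = [(0, "10.0.0.5", "fail"), (30, "10.0.0.5", "fail"),
          (60, "10.0.0.5", "fail"), (90, "10.0.0.5", "success"),
          (100, "10.0.0.9", "success")]
print(correlate(events))  # [('10.0.0.5', 90, 3)]
```

A real SIEM runs hundreds of such rules across millions of events from heterogeneous sources, which is exactly why deployment and tuning require dedicated expertise.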
Although SIEM solutions are now widely adopted by most large enterprises, several major challenges, according to many information security officers, prevent organizations from using them efficiently:
- Current SIEM solutions require specially trained security operations experts to operate; many organizations simply do not have enough resources to maintain such teams.
- Current SIEM solutions generate too many false positive alerts, forcing security teams to deal with overwhelming amounts of unnecessary information. Obviously, current correlation and anomaly detection algorithms are not efficient enough.
- The degree of integration offered by current SIEM solutions is still insufficient to provide a truly single management console for all kinds of operations. Responding to a security incident may still require performing too many separate actions using different tools.
Another common shortcoming of current SIEM solutions is a lack of flexibility when dealing with unstructured data. Since many of the products are based on relational databases, they force rigid schemas onto collected information and do not scale well with large amounts of data. This obviously prevents them from efficiently detecting threats in real time.
Over the last couple of years, these challenges have led to the emergence of the “next-generation SIEM” or rather a completely new technology called Real-time Security Intelligence (RTSI). Although the market is still in its early stage, it is already possible to summarize the key differentiators of RTSI offerings from previous-generation SIEM tools:
- Real-time or near real-time detection of threats that enables quick remediation before damage is done;
- Possibility to correlate real-time and historical data from various sources, as well as apply intelligence from external security information services, thus detecting malicious operations as whole events, not separate alerts;
- A small number of clearly actionable alarms, achieved by reducing the false positive rate and by introducing different risk levels for incidents;
- Automated workflows for responding to detected threats, such as, for example, disrupting clearly identified malware attacks or submitting a suspicious event to a managed security service for further analysis.
The biggest technological breakthrough that has made these solutions possible is Big Data analytics. The industry has finally reached the point where business intelligence algorithms for large-scale data processing, previously affordable only to large corporations, have become commoditized. Using readily available frameworks such as Apache Hadoop and inexpensive commodity hardware, vendors are now able to build solutions for collecting, storing and analyzing huge amounts of unstructured data in real time.
This makes it possible to combine real-time and historical analysis and identify new incidents as being related to others that occurred in the past. Combined with external security intelligence sources that provide current information about the newest vulnerabilities, this can greatly facilitate identification of ongoing APT attacks on the network. Having a large amount of historical data at hand also significantly simplifies initial calibration to the normal patterns of activity of a given network, which are then used to identify anomalies. Existing RTSI solutions are already capable of automated calibration with very little input required from administrators.
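The automated calibration described above can be sketched as learning a statistical baseline from historical data and then flagging deviations from it. A deliberately simplified Python example follows; the metric and the three-sigma threshold are illustrative assumptions, not taken from any real RTSI product:

```python
import statistics

# Toy anomaly detection: learn the normal level of some activity metric
# (e.g., outbound connections per hour) from historical data, then flag
# new observations that deviate too far from the learned baseline.
# The three-standard-deviation threshold is an illustrative choice.

def build_baseline(history):
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, max_sigma=3.0):
    mean, stdev = baseline
    return abs(value - mean) > max_sigma * stdev

history = [98, 102, 101, 97, 99, 103, 100, 100]  # "normal" hourly counts
baseline = build_baseline(history)               # mean 100, stdev 2

print(is_anomalous(101, baseline))   # False: within normal variation
print(is_anomalous(250, baseline))   # True: e.g. possible exfiltration
```

Real products build far richer multi-dimensional models, but the principle is the same: the more historical data available, the better the baseline, and the fewer false positives.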
The alerting and reporting capabilities of RTSI solutions are also significantly improved. Big Data analytics technology can generate a small number of concise, clearly categorized alerts that allow even an inexperienced person to make a relevant decision, while providing a forensic expert with much more detail about the incident and its relation to other historical anomalies.
As mentioned above, the RTSI market is still in its early stage. Many new offerings with various scopes of functionality, from established IT security vendors as well as startups, are available today or planned for release in the near future. It is still difficult to predict in which direction the market will evolve and which features should be expected from an innovation leader. However, it is already clear that only vendors that offer complete solutions, not just sets of tools, will win the market. It is important to understand that Real-time Security Intelligence is more than just SIEM 2.0.
This article was originally published in the KuppingerCole Analysts’ View Newsletter. Also check out video statements of my colleagues Mike Small and Rob Newby on this topic.
07.08.2014 by Alexei Balaganski
Large-scale security breaches are nothing new. Last December we heard about the hack of the American retail chain Target’s network, in which over 40 million credit cards and 70 million addresses were stolen. This May, eBay announced that hackers had gotten away with the records of more than 145 million of its customers. And the trend doesn’t stop: despite all the efforts of security researchers and government institutions, data breaches occur more frequently and get bigger and more costly. The average total cost of a data breach for a company is currently estimated at $3.5 million. The public has heard about these breaches so often that it has become a bit desensitized to them. However, the latest announcement from the American company Hold Security should definitely make even the laziest people sit up and take notice.
Apparently, a gang of cybercriminals from Russia, which the company dubbed CyberVor (“cyber thief” in Russian), has managed to amass the largest known collection of stolen credentials: over 1.2 billion passwords and more than 500 million email addresses! The company hasn’t revealed many details, but these were not, of course, the spoils of a single breach – the gang has allegedly compromised over 420,000 websites over the course of several years. Still, the numbers are overwhelming: the whole collection contains over 4.5 billion records. Surely, I could be somewhere in that huge list, too? What can I do to prevent hackers from stealing my precious passwords? Can someone help me with that?
In a sense, we still live in the era of the Internet Wild West. No matter how often passwords are proclaimed dead and how hard security vendors try to sell their alternative, more secure authentication solutions, no matter how long government commissions discuss stricter regulations and larger fines for data breaches – way too many companies around the world still store their customers’ credentials in clear text, and way too many users still use the same password, “password”, for all their accounts. Maybe in twenty years or so we will remember these good old days of “Internet Freedom” with romantic nostalgia, but for now we have to face the harsh reality of a world where nobody is going to protect our personal information for us.
This, by the way, reminds me of another phenomenon of the Wild West era: snake oil peddlers. Unfortunately, quite a few security companies are now attempting to capitalize on the data breach fear in a similar way. Instead of providing customers with the means to protect their credentials, they offer services like “pay to see whether your account has been stolen”. And these services aren’t cheap.
Surely, these companies need to earn money just like everyone else, but charging people for such useless information is dubious at best. I’m not even going to mention the fact that there might even be services out there that are essentially good old phishing sites, collecting your credentials to use them for malicious purposes.
As the famous Russian novel “The Twelve Chairs” states, mocking a common propaganda slogan of the early Soviet period: “Assistance to drowning persons is in the hands of those persons themselves.” I published a blog post some time ago outlining a list of simple rules one should follow to limit the consequences of a data breach: create long and complex passwords, do not reuse the same password across several sites, invest in a good secure password manager, look for sites that support two-factor authentication, and so on. Of course, this won’t prevent future breaches from happening (apparently, nothing can), but it will help minimize the consequences: in the worst case, only one of your accounts will be compromised, not all of them.
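As an illustration of the first rule: a long random password takes only a few lines to generate from a cryptographically secure random source. This Python sketch simply shows what a good password manager does for you under the hood; the alphabet and length are arbitrary choices:

```python
import secrets
import string

# Generate a long, random password from a cryptographically secure
# source (never use the ordinary `random` module for secrets).
# 20 characters drawn from a 76-symbol alphabet gives roughly 125 bits
# of entropy -- far beyond practical brute-force or dictionary attacks.

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"

def generate_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a different password on every run
```

The catch, of course, is that nobody can memorize dozens of such passwords, which is exactly why a password manager belongs on the list of rules above.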
Whenever you hear that a website you use has been hacked, you no longer have to wonder whether your credentials have been stolen: you simply assume the worst, spend a minute changing your password, and rest assured that the hackers have no use for your old credentials anymore. This way, you’re not only avoiding exposure to “CyberVors”, but also not letting “CyberZhuliks” (cyber fraudsters) make money by selling you their useless services.
23.07.2014 by Alexei Balaganski
On Tuesday, the security company Trend Micro unveiled a long and detailed report on “Operation Emmental”, an ongoing attack on online banking sites in several countries around the world. This attack is able to bypass the popular mTAN two-factor authentication scheme, which uses SMS messages to deliver transaction authorization numbers. Few details about the scale of the operation have been revealed, but apparently the attack was first detected in February and has affected over 30 banking institutions in Germany, Austria and Switzerland, as well as Sweden and Japan. The hackers have supposedly gotten away with millions stolen from both consumer and commercial bank accounts.
Now, this is definitely not the first time hackers have defeated SMS-based two-factor authentication. Trojans designed to steal mTAN codes directly from mobile phones first appeared in 2010. Contrary to popular belief, these Trojans do not target only Android phones: in fact, the most widespread one, ZeuS-in-the-Mobile, has been discovered on various mobile platforms, including Android, Symbian, BlackBerry and Windows Mobile. In 2012, an attack campaign dubbed “Eurograbber” successfully stole over 36 million euros from banks in Italy, Spain and the Netherlands. Numerous smaller-scale attacks have been uncovered by security researchers as well. So, what exactly is new and different about the Emmental attack?
First it’s necessary to explain in a few words how a typical attack like Eurograbber actually works.
- Using traditional methods like phishing emails or compromised web sites, hackers lure a user to click a link and download a Windows-based Trojan onto their computer. This Trojan will run in the background and wait for the user to visit their online banking site.
- As soon as the Trojan detects a known banking site, it will inject its own code into the web page. This code can, for example, display a “security advice” instructing the customer to enter their mobile phone number.
- As soon as the hackers have a phone number, an SMS message with a link to a mobile Trojan is sent to it and the customer is instructed to install the malicious SMS-grabbing app on their phone.
- With both the customer’s online banking PIN and the SMS TAN in hand, hackers can easily initiate a fraudulent transaction, transferring money out of the customer’s account.
It’s quite obvious that such a scheme can only work when both PC and mobile Trojans operate in parallel, coordinating their actions through a C&C server run by hackers. This means that it can also be relatively easily disrupted simply by using an antivirus, which would detect and disable the Trojan. Another method is deploying special software on the banking site, which detects and prevents web page injections.
The hackers behind the Emmental attack use a different approach. Instead of delivering a Trojan to the customer’s computer, they use a small agent that masquerades as a Windows updater. Upon starting, this program changes the local DNS settings, replacing the IP addresses of known online banking sites with the address of a server controlled by the hackers. Additionally, it installs a new root SSL certificate, which forces browsers to consider this rogue server trusted. After that, the program deletes itself, leaving no trace of malware on the computer.
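A rogue root certificate works precisely because browsers trust any certificate chain ending in a locally installed CA. One well-known countermeasure is certificate pinning: comparing the fingerprint of the certificate the server actually presents against a known-good value, regardless of what the local CA store says. A minimal Python sketch of that comparison follows; the pinned value here is a placeholder (it is simply the SHA-256 of the string “test”), not a real bank’s fingerprint:

```python
import hashlib

# Certificate pinning in a nutshell: instead of trusting whatever chain
# the local CA store validates (which a rogue root certificate subverts),
# compare the SHA-256 fingerprint of the certificate actually presented
# by the server against a value pinned in advance.
# The pinned fingerprint below is a placeholder: sha256(b"test").

PINNED_FINGERPRINT = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def is_trusted(cert_der: bytes) -> bool:
    return fingerprint(cert_der) == PINNED_FINGERPRINT

# In a real client, cert_der would come from the TLS handshake,
# e.g. via ssl.SSLSocket.getpeercert(binary_form=True).
print(is_trusted(b"test"))   # True: matches the placeholder pin
print(is_trusted(b"other"))  # False: reject the connection
```

Pinning is typically done in dedicated banking apps rather than general-purpose browsers, which is one reason attacks like Emmental target the browser-plus-SMS flow in the first place.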
The rest of the attack is similar to the one described above, but with a twist: the user never connects to the real banking site again; all communications take place with the fraudulent server. This deception can continue for a long time, and only after receiving a monthly statement from the bank would the user find out that their account has been emptied.
In other words, while Emmental is not the first attack on mTAN infrastructure, it’s an important milestone demonstrating that hackers are actively working on new methods of defeating it, and that existing solutions that are supposed to make banks more resilient against this type of attack are much less effective than believed. SMS-based two-factor authentication has been compromised and should no longer be considered a strong authentication method. The market already offers a broad range of solutions from smartcards and OTP tokens to Mobile ID and smartphone apps. It’s really time to move on.
18.07.2014 by Alexei Balaganski
So, unless you’ve been hiding under a rock this week, you’ve definitely heard about the historic global partnership deal forged between IBM and Apple this Tuesday. The whole Internet has been abuzz for the last few days, discussing what long-term benefits the partnership will bring to both parties, as well as guessing which competitors will suffer the most from it.
Different publications have named Microsoft, Google, Oracle, SAP, Salesforce and even BlackBerry as the companies the deal was primarily targeted against. Well, at least for BlackBerry this could indeed be one of the last nails in the coffin, as its shares plummeted after the announcement and the trend seems to be long-term. IBM’s and Apple’s shares rose, unsurprisingly; however, financial analysts don’t seem to be too impressed (in fact, some recommend selling IBM stock). This is, however, not the point of my post.
Apple and IBM have a history of bitter rivalry. 30 years ago, when Apple unveiled its legendary “1984” commercial, it was a tiny contender against IBM’s domination of the PC market. How times have changed! Apple has since grown into the largest player in the mobile device market, with a market capitalization several times larger than IBM’s. IBM sold its PC hardware business to Lenovo years ago and is currently concentrated on enterprise software, cloud infrastructure, and big data analytics and consulting. So, they are no longer competitors, but can we really consider them equal partners? Apple’s cash reserves continue to grow, while IBM’s revenues have been declining for the last two years. After losing a $600M contract with the US government to AWS last year, a partnership with Apple is a welcome change for IBM.
So, what’s in this deal, anyway? In short, it includes the following:
- IBM introduces its enterprise software and services platform, as well as a set of over 100 industry-specific enterprise apps specifically designed and optimized for Apple devices. The newly announced MobileFirst platform is technically based on IBM’s existing MaaS360 mobile device management platform and BlueMix, a service development platform for IBM cloud, but newly packaged and optimized specifically for iOS 8.
- With IBM’s help, Apple’s existing customer support service AppleCare will be extended to enterprise customers. IBM will take over support services for its Apple-tailored enterprise solutions, as well as provide on-site support for iOS and Mac devices.
- IBM will become Apple’s first worldwide distributor and reseller, offering procurement and reselling of devices, security and activation, and other managed services.
For Apple, this deal marks a renewed attempt to get a better hold on the enterprise market. It’s well known that Apple has never been successful there, and whether that was because it ignored enterprise needs or simply because it couldn’t develop the necessary services in-house can be debated. This time, however, Apple is bringing in a partner with a lot of experience and a large portfolio of existing enterprise services (notorious, however, for their consistently bad user experience). Could an exclusive combination of a shiny new mobile UI with a proven third-party backend finally change the market situation in Apple’s favor? Personally, I’m somewhat skeptical: although a better user experience does increase productivity and would be a welcome change for many enterprises, we’re still far away from a mobile-only world, and UI consistency across mobile and desktop platforms is a more important factor than a shiny design. In any case, what matters most for Apple is the possibility to sell more devices.
For IBM, the deal looks even less transparent. Granted, we do not know the financial details, but judging by how vehemently the announcement stated that IBM is “not just a channel partner for Apple”, many analysts suspect that reselling Apple devices could be a substantial part of IBM’s profit from the partnership. Another important point is, of course, that IBM cannot afford to maintain a truly exclusive iOS-only platform. Sure, iOS is still a dominant platform on the market, but its share is far from 100%. In fact, it is already decreasing and will probably continue to decrease as other platforms gain market share. Android has been growing steadily over the last year, and it’s definitely too early to dismiss Windows Phone (remember how people tried to dismiss the Xbox years ago?). So, IBM must continue to support all other platforms with products such as MaaS360 and can only rely on additional services to support the notion of iOS exclusivity. In any case, the partnership will definitely bring new revenue from consulting, support and cloud services; however, it’s not easy to say how much Apple will actually contribute to that.
So, what about the competitors? One thing that at least several publications seem to ignore is that those companies that are supposed to suffer from the new partnership are operating on several completely different markets and comparing them to each other is like comparing apples to oranges.
For example, Apple does not need IBM’s assistance to trump BlackBerry as a rival mobile device vendor. But applying the same logic to Microsoft’s Windows Phone platform would be a big mistake. Sure, its current share of the mobile hardware market is quite small (though not in every market: in Germany it’s over 10% and growing), but to claim that Apple/IBM will drive Microsoft out of the enterprise services business is simply ridiculous. In fact, Microsoft is a dominant player there with products like Office 365 and Azure Active Directory, and it’s not going anywhere yet.
Apparently, SAP CEO Bill McDermott isn’t too worried about the deal either. SAP already offers 300 enterprise apps for the iOS platform and claims to be years ahead of its competitors in the area of analytics software.
As for Google – well, they do not make money from selling mobile devices. Everything Google does is designed to lure more users into their online ecosystem, and although Android is an important part of their strategy, it’s by no means the only one. Google services are just as readily available on Apple devices, after all.
Anyway, the most important question we should ask isn’t about Apple’s or IBM’s strategies, but about our own. Does the new IBM/Apple partnership have enough impact to make an organization reconsider its current MDM, BYOD or security strategy? The answer is obviously “no”. BYOD is by definition heterogeneous, and any solution an organization deploys for managing mobile devices (and, more importantly, access to corporate information from those devices) that is locked to a single platform is simply not a viable option. Good design may be good business, but it is not the most important factor when the business is primarily about enterprise information management.
14.07.2014 by Alexei Balaganski
Since launching its Web Services in 2006, Amazon has been steadily pushing towards global market leadership by continuously expanding the scope of its services, increasing scalability and maintaining low prices. Last week, Amazon made another big announcement, introducing two major new services with funny names but a heavy impact on the future competition in the mobile cloud services market.
Amazon Zocalo (Spanish for “plinth”, “pedestal”) is a “fully managed, secure enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity”. In other words, it is one of the few user-facing AWS services and none other than a direct competitor to Box, Google Drive for Work and other products for enterprise document storage, sharing, and collaboration. Built on top of AWS S3 storage infrastructure, Zocalo provides a cross-platform solution (for laptops, iPads and Android tablets, including Amazon’s own Kindle Fire) for storing and accessing documents from anywhere, synchronizing files between devices, and sharing documents for review and feedback. Zocalo’s infrastructure provides at-rest and in-transit data encryption, centralized user management with Active Directory integration and, of course, ten AWS geo-regions to choose from in order to be compliant with local regulations.
Now, this does look like “another Box” at first sight, but with the ability to offer cloud resources cheaper than any other vendor, even with Zocalo’s limited feature set Amazon has every chance to quickly gain a leading position in the market. With Google announcing unlimited storage for its enterprise customers and now Amazon driving prices further down, cloud storage itself has very little market value left. Just being “another Box” is simply no longer sustainable, and only the biggest players and those who can offer additional services on top of their storage infrastructure will survive in the long run.
Amazon Cognito (Italian for “known”) is a “simple user identity and data synchronization service that helps you securely manage and synchronize app data for your users across their mobile devices.” Cognito is part of the newly announced suite of AWS services for mobile application developers, so it may not have caused a splash in the press like Zocalo, but it’s still worth mentioning here because of its potentially big impact on future mobile apps. First of all, by outsourcing identity management and profile synchronization between devices to Amazon, developers can free up resources to concentrate on the business functionality of their apps and thus bring them to market faster. Second, when using the Cognito platform, app developers always work with temporary, limited-privilege identities, safeguarding their AWS credentials as well as enabling uniform access control across different login providers. Thus, developers are implicitly led towards implementing security best practices in their applications.
Currently, Cognito supports several public identity providers, namely Amazon, Facebook and Google; however, the underlying federation mechanism is standards-based (OAuth, OpenID Connect), so I can’t believe it won’t soon be extended to support enterprise identity providers as well.
Still, as much as the ex-developer in me feels excited about Cognito’s capabilities, the analyst in me cannot help thinking that Amazon could have gone a step further. Currently, each app vendor maintains their own identity pool for their users. But why not give users control over their identities? Had Amazon made this additional step, it could eventually have become the world’s largest Life Management Platform vendor! How’s that for an idea for Cognito 2.0?
10.07.2014 by Alexei Balaganski
It really didn’t take long after my last blog post on SCADA security for an exciting new development to appear in the press. Several security vendors, including Symantec and F-Secure, have revealed new information about a hacker group called “Dragonfly” (or alternatively “Energetic Bear”) that has launched a massive cyber-espionage campaign against US and European companies, mainly from the energy sector. Allegedly, the most recent development indicates that the hackers have not only managed to compromise those companies for espionage, but also possess the capabilities needed for sabotage, disruption of and damage to the energy grids of several countries.
Previous reports show that the group known as “Energetic Bear” has been operating since at least 2012, with highly qualified specialists based somewhere in Eastern Europe. Some experts go as far as to claim that the group has direct ties to Moscow, operating under the control of the Russian secret services. So, it’s quite natural that many publications have already labeled Dragonfly the next Stuxnet.
Now, as much as I love bold statements like this, I personally still find it difficult to believe it. I admit that I have not seen all the evidence yet, so let’s summarize what we do know already:
- A hacker group “Energetic Bear” has been active in the cyber-espionage scene since at least 2011. Their previous targets, besides manufacturing and energy companies, include as diverse organizations as Asian universities, US healthcare providers, European IT organizations, etc.
- The group often uses a remote access tool dubbed Havex, which appears to be their own custom development, but relies on other tools readily available on the black market as well.
- Experts have analyzed multiple variations of the Havex tool and, judging by activity patterns, concluded that its developers are operating within the Eastern European time zone.
- Until recently, the malware has been primarily distributed over “traditional” channels, such as spam mails and exploit kits.
However, the most recent development that has brought Dragonfly into the limelight is that the group has begun distributing the malware using the “watering hole” approach. Several ICS software vendor websites have been compromised, and their software installers available for download have been infected with Havex. It’s been reported that in one case, compromised software has been downloaded at least 250 times.
Since the sites belonged to notable vendors of programmable logic controllers used to manage wind turbines and other critical equipment, there could not be any other conclusion than “Russia is attacking our energy infrastructure”, right? Or could there?
Quite frankly, I fail to see any resemblance between Stuxnet and Dragonfly at all.
Stuxnet was a highly targeted attack created for one specific purpose: destroying the Iranian nuclear enrichment program. It contained modules developed specifically for a particular type of SCADA hardware. Its structure was so complex that experts are still not done analyzing it.
Dragonfly, on the other hand, is based on existing and widely used malware tools. It has been targeting a wide array of different organizations – current reports show that it has managed to compromise over 1,000 companies. Also, the researchers who discovered the operation could not find any traces of PLC-controlling payloads; the tool’s only purpose appears to be intelligence gathering. The claims of ties to the Russian secret services seem to be completely unsubstantiated as well.
So, does all this mean that there is no threat to our energy infrastructures after all? Of course it does not! If anything, the whole Dragonfly story has once again demonstrated the abysmal state of information security in Industrial Control Systems around the world. Keep in mind that this time the cause of the attack wasn't even the weak security of an energy infrastructure. Protecting your website from hacking belongs to the basic “security hygiene” norms and does not require any specialized software; a traditional antivirus and a firewall would do just fine. Unfortunately, even SCADA software vendors seem to share the relaxed approach to security that is typical for the industry.
The fact that the Dragonfly case has been publicized so much is actually good news, even if not all publications are up to good journalism standards. If this publicity leads to tighter regulations for ICS vendors and increases awareness of the risks among ICS end users, we all win in the end. Well, maybe except the hackers.
01.07.2014 by Alexei Balaganski
Office 365 is a popular cloud-based office productivity service built around the Microsoft Office platform. Initially released in 2011, it went through a major upgrade in 2013 and is currently offered with different plans for home, small business, midsize and enterprise customers. Internally, the Office 365 platform uses Microsoft Azure Active Directory for identity management and, with the exception of home and small business plans, offers three identity models for different user management scenarios. The recommended approach is to always start with the simplest model and transition to a more complicated one (or back) at any time as requirements change. Let’s have a quick look at these models.
The Cloud Identity model is the simplest one and also the only one available for home and small business users. It is typically used when an organization has no existing on-premise directory. In this case, users’ details are stored in the cloud directory only and can be managed using the standard Office 365 admin portal, which supports individual account management as well as rudimentary batch processing using CSV files. Administrators can also reset user passwords or assign certain users (such as helpdesk staff) to perform this for other users. There is no way for a user to reset their own password.
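To make the batch-processing option a bit more concrete, the following Python sketch builds a CSV payload for bulk user creation. Note that the column names and the sample account are purely illustrative placeholders; the admin portal documents the exact header row it expects.

```python
# Sketch of preparing a CSV file for bulk user creation via the admin portal.
# The column names below are illustrative, not the portal's official schema.
import csv
import io

def users_to_csv(users: list) -> str:
    """Serialize a list of user dicts into CSV text for batch upload."""
    columns = ["User Name", "First Name", "Last Name", "Display Name"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()          # emit the header row the portal parses
    for user in users:
        writer.writerow(user)     # one line per account to be created
    return buf.getvalue()

# Example (hypothetical tenant):
# users_to_csv([{"User Name": "jdoe@contoso.onmicrosoft.com",
#                "First Name": "Jane", "Last Name": "Doe",
#                "Display Name": "Jane Doe"}])
```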
Microsoft also provides a set of modules for Windows PowerShell to enable automation of common administration tasks. Another convenient option is using the Azure AD Graph API, which is a RESTful programming interface for developers to easily build applications integrating with Azure Active Directory.
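As an illustration of the Graph API's RESTful nature, directory objects such as users are exposed as plain HTTP resources. The Python sketch below shows what a minimal query might look like; the tenant name, bearer token and api-version value are illustrative placeholders, not taken from any real deployment.

```python
# Minimal sketch of querying users via the Azure AD Graph API (REST).
# Tenant name, token, and api-version below are illustrative placeholders.
import json
import urllib.request

GRAPH_BASE = "https://graph.windows.net"

def graph_url(tenant: str, resource: str, api_version: str = "1.5") -> str:
    """Build a Graph API request URL for a directory resource."""
    return f"{GRAPH_BASE}/{tenant}/{resource}?api-version={api_version}"

def list_users(tenant: str, access_token: str) -> list:
    """Fetch the tenant's user list; requires a valid OAuth 2.0 bearer token."""
    req = urllib.request.Request(
        graph_url(tenant, "users"),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        # Graph responses wrap the result set in a "value" array
        return json.load(resp)["value"]

# Example:
# graph_url("contoso.onmicrosoft.com", "users")
# -> "https://graph.windows.net/contoso.onmicrosoft.com/users?api-version=1.5"
```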
The majority of organizations that already have an on-premise directory will choose the Synchronized Identity model, which relies on several tools to synchronize existing user accounts with the cloud directory. Since Microsoft introduced password hash sync in 2013, this model can also provide a “single sign-on” user experience with the same password on-premises and in the cloud, without the complexity of identity federation.
Microsoft provides several different tools for directory synchronization, starting with the old and proven DirSync tool suitable for organizations that have a single Active Directory. In the simplest case DirSync can be installed directly on the domain controller and does not require any additional infrastructure.
The next-generation Azure AD Sync tool is designed to replace DirSync and adds many new functions, including support for other directories such as LDAP or SQL. More complex scenarios are possible using Microsoft Forefront Identity Manager and its various connectors (currently, this is still the only solution for synchronizing with non-AD directories).
It should be noted that identity synchronization with Office 365 has several limitations. For example, synchronizing a single on-premise directory with several cloud tenants leads to multiple problems and is actively discouraged by Microsoft. Therefore, an organization having several Office 365 subscriptions should consider merging them into one using third party tools before setting up synchronization.
The third option is the Federated Identity model. Since the Synchronized Identity model already covers the vast majority of use cases with significantly less administration effort, organizations should consider deploying a federation infrastructure only for certain complex scenarios, for example:
- An ADFS infrastructure is already in place or there are existing third party identity providers;
- Special technical requirements, like smartcard authentication or support for password reset via Office 365 portal;
- Special policy requirements, like login auditing requirement or regulations prohibiting password synchronization.
Important note: one should not forget that federation still requires user accounts to be synchronized with the on-premise directory, so one should never jump directly to the federated model without setting up synchronization first. Azure Active Directory currently supports multiple protocols for identity federation with Active Directory Federation Services 2.0 or other third-party Security Token Services. The most recent addition to this list is SAML 2.0, which was announced in March 2014.
Microsoft has established “Works with Office 365 – Identity program”, which is a qualification for third party identity providers for federation with Office 365. A list of qualified providers is maintained here.
Unfortunately, current versions of Office desktop applications have a major incompatibility with many third-party identity providers, since they only support so-called active authentication, which can only be accomplished using the WS-Trust protocol. Until an update is released later in 2014, Microsoft officially supports federation only with AD FS 2.0 or with qualified third-party providers from the list above.
Microsoft has come a long way since the initial release of Office 365. The current generation of Azure Active Directory enables different identity models that support nearly all possible usage scenarios. While there are still several major interoperability issues the company has to solve, unless you have a really unusual on-premise identity environment, you should be covered by one of the options above.
This article was originally published in the KuppingerCole Analysts’ View Newsletter.
30.06.2014 by Alexei Balaganski
If you attended our European Identity and Cloud Conference this May, you probably noticed that, as opposed to previous years, a significantly bigger part of the agenda and a substantial number of expo stands were devoted to practical, “down to earth” aspects of IT security. Multifactor authentication, encryption technologies, source code analysis, even backup – many of these topics have previously been looked down upon by strategists as boring tasks for IT engineers.
Well, times have changed. The explosive growth of computing power and networks, the continued erosion of enterprise perimeters, the development of ever more complicated Advanced Persistent Threats – all these trends are bringing good old Information Security back to the front pages. Before the Snowden revelations, not many had given serious thought to encrypted communications. Before Heartbleed, not many people actually knew what “static code analysis” meant.
There is, however, one topic that I personally consider extremely important, which has not received enough limelight in recent years. This, of course, is Industrial Control System security, more often referred to as SCADA security.
In layman’s terms, SCADA (supervisory control and data acquisition) is a system for monitoring and controlling industrial processes of different kinds. Over decades, SCADA systems have evolved into large-scale systems operating complexes of equipment over large distances. SCADA systems are widely utilized in manufacturing, oil and gas refining, power generation and distribution, water treatment, and also for controlling facilities like heating and air conditioning in buildings or ships. In other words, SCADA systems control a significant part of every nation’s critical infrastructure, which makes them an important target for that nation’s enemies.
Unfortunately, SCADA systems have historically never been designed with security in mind. Early systems were monolithic, physically isolated systems without any network connectivity. Later generations were based on proprietary LAN protocols that usually lacked any kind of transport security or authentication. Modern (or rather “current”) SCADA systems have evolved into large-scale decentralized systems with an ever-increasing number of network connections. They are gradually shifting from proprietary protocols to open standards and becoming increasingly interconnected with office networks and the Internet. Many workstations and human-machine interfaces (HMI) are actually standard Windows PCs, often running outdated and unpatched software. Programmable Logic Controllers (PLC), the components that actually control physical processes, are even more vulnerable, since their software and network protocols historically lack any security. Until recently, both SCADA vendors and the enterprises deploying these systems gave little consideration to security issues, more or less relying on security by obscurity.
The discovery of the Stuxnet malware in 2010 shattered that false feeling of safety. A piece of software on a USB drive planted at the Iranian nuclear facility by US and Israeli intelligence services was able to disrupt the PLCs that controlled nuclear material enrichment centrifuges and ultimately physically destroy them. This case was widely publicized four years ago and naturally led to the establishment of standards and guidelines for the prevention of such incidents in both the public and private sectors in many countries.
However, it somehow failed to grab the general public’s attention as much as Snowden and Heartbleed did later. Sure, the press regularly reports about new vulnerabilities found in different ICS systems, like this one (from last week!) or this. Check out my favorite quote:
The poor security of such software was revealed by a project Mr Rios and a colleague undertook in which they sought to find 100 Scada bugs in 100 days.
“We ended up finding over 1,000 bugs in 100 days,” he said. “Scada software security simply hasn’t kept up with modern times. The security of software like iTunes is much more robust than the software supporting our critical infrastructure.”
So, what makes SCADA systems so difficult to secure? Many reasons, actually, that require completely different approaches to address them.
- As I already mentioned above, current SCADA systems have evolved into distributed systems based on open network standards and commodity software, so they are theoretically vulnerable to the same attack vectors as other corporate networks. However, their design has historically never addressed security and identity issues at all.
- Although many components of SCADA systems run Windows, standard endpoint protection solutions are not particularly suitable for them, because even a minor latency spike caused by malware database update may lead to a disruption of the manufacturing process. Addressing process continuity in anti-malware software requires substantial changes in its logic.
- Traditional detection and blocking techniques are obviously not applicable for specialized systems like PLCs. Development of specialized solutions for their protection requires tight collaboration with PLC vendors. The same is true for integration with existing control and monitoring modules of SCADA systems.
- The newest trend in SCADA development follows the current trends in IT in general: growing adoption of cloud services, introduction of the “Internet of Things” approach to system design, etc. This leads to rapid growth in complexity, since the number of modules and connections between them increases exponentially. Of course, this enables even more new attack vectors.
- Growing political tensions between developed countries, as well as the activities of global terrorist organizations, mean that critical infrastructures are more likely to become targets of cyber-attacks with potentially catastrophic outcomes.
Security experts have long been talking about doomsday scenarios as a consequence of a possible attack on a power grid or a chemical plant. Luckily, until now these predictions have never materialized, but the poor state of ICS security still makes them the proverbial sword of Damocles hanging over our heads. Statistics clearly show that the number and level of sophistication of attacks on ICS are steadily increasing.
Yes, until now we have not experienced a successful cyber-attack on a critical infrastructure that would lead to an industrial disaster with human casualties. But can we be sure that it won’t happen tomorrow? Not really. Luckily, both government organizations and security vendors are already working on different approaches to address this threat – both in short-term and long-term perspectives. And I firmly believe that EIC could be a good meeting place for these specialists to discuss this topic. Maybe, next year already?
If you’re looking for more information on this topic, check out these published KuppingerCole research documents:
26.05.2014 by Alexei Balaganski
Last Wednesday, eBay Inc. announced that their user database had been compromised and that hackers were able to get away with “encrypted passwords and other non-financial data” of more than 145 million eBay customers. eBay has informed us that financial information was not affected and that they have not detected any increased fraudulent activity on their platform. Still, just in case, you should change your password, and they are very sorry for this inconvenience.
Quite frankly, for any person working in the field of information security, this announcement raises a lot of inconvenient questions.
Apparently, the breach occurred over two months ago, sometime in late February or early March. Yet the official acknowledgement of the incident was only made public last week. What took them so long? Does it mean that the hack went unnoticed for weeks, if not months? In fact, both the US and the EU have security breach notification laws, and if the eBay case is not a direct violation of these laws, then in my opinion the laws have to be strengthened to avoid similar situations in the future.
It has been reported that the attackers managed to compromise employee log-in credentials, gain access to the eBay corporate network and then proceed to steal customers’ emails, phone numbers, addresses, birthdates and encrypted passwords. To me, this is a clear indication of a wildly inadequate security infrastructure, or possibly of serious deficiencies in their service platform. The fact that eBay employees have complete access to the customer database is strongly reminiscent of a similar case involving a certain intelligence agency and an idealistic system administrator. Apparently, the eBay security team has never heard of Edward Snowden.
Yet what I find most disturbing is that by labeling this incident a mere inconvenience that only requires a password change as a precaution, eBay is actively downplaying the privacy-related implications of the hack. Hackers have managed to get away with enough personal information on millions of people from around the world to use it for nearly any kind of cybercrime on a massive scale: spamming, phishing, spreading malware, identity theft and so on. And, of course, if you’ve used the same credentials on another website, it will potentially be compromised as well.
Yet, even after a long chain of high-profile corporate security breaches (eBay, AOL, Target and, of course, the Heartbleed bug), the general public still seems not to fully realize the extent of both the security- and privacy-related consequences of these events. After hearing people say things like “oh no, I have to come up with another strong password again” or “I already changed my password after reading about Heartbleed, isn’t that enough?”, I decided to make a list of measures every user should take to protect themselves against past and future security breaches like eBay’s. Feel free to leave your suggestions in the comments if you believe I forgot something.
1. Think twice before giving an online service too much of your personal information. Does an obscure online game really need to know your birthdate or mother’s maiden name? A hacker might use this knowledge to impersonate you and get access to your online banking, for example. Life Management Platforms may be the future, but unfortunately we are not there yet, so protecting your personal information is still your personal responsibility.
2. Whenever possible, try to avoid password authentication altogether. Quite a few online services already offer stronger alternatives to passwords, most often some kind of two-factor authentication. Google has its own 2-step verification platform, Facebook and Twitter support SMS-based verification codes, and Dropbox even goes a step further and lets you choose from several different strong authentication methods. You’ll find a comprehensive list here, for example. Also look for buttons or logos of third-party strong authentication services like MYDIGIPASS, M-Pin or Duo Security. Surprisingly, eBay still doesn’t support strong authentication, yet its subsidiary PayPal does, and I strongly recommend starting to use it ASAP.
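For the curious, the app-generated verification codes mentioned above typically implement the TOTP algorithm (RFC 6238), which derives a short-lived code from a shared secret and the current time. Here is a minimal Python sketch of the idea (an illustration of the standard algorithm, not any particular vendor's implementation):

```python
# TOTP (RFC 6238): a short-lived code derived from a shared secret and the clock.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int = None,
         step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    if timestamp is None:
        timestamp = int(time.time())
    # HOTP counter = number of elapsed time steps, as a big-endian 64-bit int
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-dependent offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both the authenticator app and the server run this same computation, so the server can verify a code without any password ever crossing the wire.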
3. Never, never, NEVER use the same password on different websites. Also, never trust a password strength indicator on any website. The only truly strong password is a long, randomly generated password, unique for each online service you use. And, by the way, never try to create a random password manually; humans are really bad at that. Use a specialized program or online service for that purpose.
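To illustrate the point, a minimal Python sketch of doing it right: the password is drawn from the operating system's entropy source (here via the standard secrets module), not from human imagination:

```python
# Generate a strong random password from the OS entropy source.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice uses os.urandom under the hood, unlike random.choice
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Call it once per online service, so that no two sites ever share a password; a password manager then remembers the results for you.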
4. Obviously, nobody can possibly remember all those complex unique passwords for many online services, but the worst mistake is to write them down and stick them to your monitor. Use password management software instead. A modern password manager is more than just secure encrypted storage for your passwords. It offers many additional features, like generating new secure passwords, automatically filling in login forms in browsers, storing secure notes and even warning you when a website you have an account on gets hacked, letting you change the password immediately. The most popular example seems to be LastPass, and for a good reason. Besides offering all of the above for free, for a reasonable fee it provides access from mobile devices, a number of multifactor authentication methods, and other useful features.
5. When choosing a password manager, one has to take privacy implications into account as well. It’s not enough to protect your password vault from hackers; one has to consider the possibility that your entire list of passwords may be handed over to government authorities after a court order, or simply land in one of the NSA’s data centers. Therefore, always choose a solution with a strong master encryption key that is known only to you. You may even opt for a standalone program like 1Password or KeePass and use third-party tools to synchronize its database, but this is less convenient.
6. Last but not least: keep educating yourself about the latest developments in security software. Vote with your wallet for the developers that integrate privacy-enhancing measures into their products. Put pressure on your local lawmakers. After all, the future of information security depends on you as well.
09.05.2014 by Alexei Balaganski
A few days ago, while announcing their new Advanced Threat Protection initiative, Piero DePaoli, Symantec’s director of product marketing, made a provocative statement, proclaiming that ‘AV is dead’. His colleague Brian Dye said that antivirus software only catches around 45% of malware attacks, and that the company is shifting its focus towards responding to attacks instead of protecting against them.
Making such bold claims to promote new products or technologies is a common marketing tactic; we even did something similar ourselves a couple of years ago, quite successfully. However, is there any substance behind this claim? Does it mean that, even armed with modern IT security solutions, we’re still left unprotected from attackers and our only salvation is the future product from Symantec?
First, let’s clarify a simple terminology issue. A modern endpoint protection product is no longer just an antivirus. In fact, it would be safe to say that traditional signature-based antivirus programs (which first appeared over 20 years ago) have been dead for ages, since nobody makes traditional computer viruses (that is, self-replicating pieces of code that spread by embedding themselves into other programs, boot sectors or data files) anymore.
Modern attacks against IT security have evolved into Advanced Persistent Threats, which are complex combinations of different attack vectors, including infected media, network exploits, software vulnerability attacks and social engineering. “Traditional” malware, such as Trojan programs and worms, still plays a central role in those attacks, however.
Modern IT security solutions have obviously evolved as well. Even ordinary users of a modern consumer antivirus product know very well that it includes not just a malware detection engine, but also a firewall for protection from network attacks, some form of application control to stop Trojans, device control to prevent data leaks and so on. These products also rely on cloud-based reputation services for application black- or whitelisting. We simply keep calling this kind of software an “antivirus”, just as we still call those powerful little computers in our pockets “phones”.
Yes, an antivirus alone is not capable of protecting against modern security threats. The only viable approach to efficient IT security is a multi-layered design combining endpoint protection, firewalls (although these are becoming less important, since modern IT no longer has a rigid perimeter), database and application security, identity management and information rights management. Security experts have been saying this for years. And for years, security vendors have been working on developing more sophisticated, more versatile, more integrated solutions to fight those threats.
The latest trend in this evolution is the so-called Real-Time Security Intelligence, where security solutions are becoming a mix of software and services, relying heavily on big data analytics and external sources of real-time security information, such as zero-day attack intelligence. For more information, have a look at this blog post. The topic will also be prominently featured at EIC 2014, and there is an in-depth report on it in the works.
As advanced persistent threats become more advanced, persistent and coordinated, another aspect of a security suite becomes more important: it’s no longer enough to offer protection against different attack vectors separately. A more integrated solution, with tighter coupling between different modules and with centralized management and monitoring, will necessarily provide more reliable detection of and protection against those threats. In this regard, Symantec is actually lagging behind many other vendors that already offer technologies like sandboxing or reputation analysis, and in better-integrated packages. A notable example here would be Kaspersky Lab, which offers a single-vendor solution with a level of integration nearly impossible to achieve through technology acquisitions or partnerships.
So, is Antivirus really dead? Yes, and it’s been buried many times in the past.
Should we worry about it? Not really, since it keeps resurrecting with new technologies and functions, while somehow still keeping its familiar name. So don’t be fooled by bold marketing claims, but look for multi-layered and tightly coupled security solutions; they are still relevant and won’t go away any time soon.