The 12 Days of Cyber Christmas

December 19, 2014

As 2014 draws to a close, we will not make any predictions for 2015. We’ll leave that to astrologers and Time Lords (and we don’t believe in either!). What we can do, however, is learn from the past. As an industry, we haven’t been good at educating those outside it about the key things they should be doing, yet the same basic security measures, either poorly implemented or missing completely, crop up time and again. So here is our take on the Twelve Days of Christmas, the 12 Days of Cyber Christmas: 12 essential security measures that organisations, both large and small, should be taking.

[Image: a partridge in a cyber tree]

On the 12th day of Christmas,
Cyber Security sent to me:
12 remotes removing
11 policies persuading
10 defaults denying
9 apps a-patching
8 coders coding
7 keys safeguarding
6 risks a-weighing
5 unsolicited pings
4 complex words
3 varied pens
2 console dumps
And no view access to SNMP

On the 12th day of Christmas, Cyber Security sent to me: 12 remotes removing

It’s truly shocking how many accounts have remote access to resources which don’t require it. Think about this from both a technical and a “need to know” business perspective. On the technical side, do you really need Remote Desktop (in Windows) via RDP? If you conclude you really do, make sure to remove the local administrator account from RDP and use specific user accounts with strong passphrases (see the 4th Day of Christmas). In the United States Senate report on the 2013 Target breach, published earlier this year, one of the key findings was that Target had given remote access to a third-party vendor which did not appear to be following information security practices.
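
As a quick sanity check, a few lines of script will tell you who currently holds RDP rights on a Windows host. Here is a minimal sketch in Python (assuming the standard net localgroup command; the group name may differ on localised systems) which flags anything that looks like an administrator account:

    import subprocess

    # List the members of the local "Remote Desktop Users" group on a
    # Windows host and flag anything that looks like an admin account.
    output = subprocess.check_output(
        ["net", "localgroup", "Remote Desktop Users"],
        universal_newlines=True)

    for line in output.splitlines():
        name = line.strip()
        if name and "admin" in name.lower():
            print("Review RDP access for:", name)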

11 policies persuading

As we reported in September, organisations large and small need to put in place security policies, user training and quarterly security KPIs, monitored by their HR teams. If you are looking for somewhere to start, physical security and social media usage are two good candidates for early 2015. It’s simply too easy to get into some organisations on the physical side, whilst we’ve seen numerous politicians and would-be politicians forced to resign following faux pas on social media in recent months.

10 defaults denying

The constant revelations from Edward Snowden in 2014 remind us all of the dangers of giving one individual in an organisation too much access to data. According to the Verizon 2014 Data Breach Investigations Report, 88% of security incidents were caused by insider misuse, either accidental or malicious. Review your access permissions to data, and ensure you have appropriate technical controls in place both to grant access only to appropriate personnel and to audit and track access to that data. I’ve been really impressed with Vormetric’s Data Security Platform, which specifically addresses this issue using file-level encryption and key management.
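
Deny-by-default is easy to state and hard to keep true over time, which is why a periodic automated sweep helps. Below is a minimal sketch (assuming a POSIX file system; the directory path is hypothetical) which flags files in a sensitive share that any user on the system can read:

    import os
    import stat

    SENSITIVE_DIR = "/srv/finance"  # hypothetical path - substitute your own

    # Walk the tree and report any file that is world-readable, i.e.
    # access granted by default rather than denied by default.
    for root, dirs, files in os.walk(SENSITIVE_DIR):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_mode & stat.S_IROTH:
                print("World-readable:", path)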

9 apps a-patching

According to Secunia’s Vulnerability Review 2014, “the more widespread a program is, and the higher the unpatched share, the more lucrative it is for a hacker to target this program, as it will allow the hacker to compromise a lot of victims”. This is reflected in their research: Microsoft IE (99% market share) had 270 known vulnerabilities at their time of publication, whereas Apple Safari (11% market share) had 75. They also discovered that 33% of IE users were running unpatched browsers, as were 14% of Safari users. Ensure you are patching all your applications, in-house and third-party, regularly in 2015!
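
Patching only happens reliably when checking for missing patches is routine and scriptable. As one small illustration (assuming a host with Python and pip installed; the same idea extends to any package manager with an “outdated” query), the sketch below lists the Python packages that have fallen behind their latest published versions:

    import subprocess

    # Ask pip which installed packages are behind their latest versions.
    outdated = subprocess.check_output(
        ["pip", "list", "--outdated"], universal_newlines=True)

    if outdated.strip():
        print("Packages needing patches:")
        print(outdated)
    else:
        print("All pip-managed packages are up to date.")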

8 coders coding

If you are developing any applications in-house, be sure to encourage secure coding best practices. A wealth of information is available on this subject; the Microsoft Security Development Lifecycle is a good starting point.
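
To make this concrete, here is one secure-coding basic in miniature: parameterised queries instead of string concatenation, which invites SQL injection. This is an illustrative Python/sqlite3 sketch, not a prescription for any particular stack:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # hostile input

    # Vulnerable pattern: attacker-controlled text becomes part of the SQL.
    #   query = "SELECT email FROM users WHERE name = '%s'" % user_input

    # Safe pattern: the driver treats user_input strictly as data.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # [] - the injection attempt matches nothing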

7 keys safeguarding

In the eBay breach announced on 21st May 2014, hackers stole the personal information of 145 million active eBay users after compromising the database containing encrypted passwords. This suggests that there wasn’t proper key management in place to safeguard the data. Remember never to grant users access to keys they don’t require. Encryption of data, both in-house and in the cloud, along with key management, is handled very efficiently by Vormetric’s Data Security Platform.
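
The principle, whatever product you use, is separation: the key lives apart from the data it protects. A minimal sketch of the idea in Python (using the third-party cryptography package; in practice the key would come from a dedicated key manager, not sit beside the database):

    from cryptography.fernet import Fernet  # third-party 'cryptography' package

    key = Fernet.generate_key()     # in practice: fetched from a key manager
    cipher = Fernet(key)

    record = b"name=A N Other;card=4111111111111111"
    token = cipher.encrypt(record)  # this ciphertext is what the database stores

    # Stealing the database yields only 'token'; without the separately
    # safeguarded key, the data stays opaque.
    print(cipher.decrypt(token) == record)  # True - but only for the key holder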

6 risks a-weighing

Whether you are an SME or a large enterprise, your organisation should be reviewing its risks regularly. To determine where your resources are best spent on mitigation, weigh the value of the assets you are protecting, the likely cost of a breach, and the legislation you must comply with. A good starting point for a risk management framework is ISO 31000.
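
One classic way of weighing a risk in money terms (a common quantitative method, though not part of ISO 31000 itself) is annualised loss expectancy. A toy calculation, with figures invented purely for illustration:

    # SLE (single loss expectancy)     = asset value x exposure factor
    # ALE (annualised loss expectancy) = SLE x annual rate of occurrence
    asset_value = 500000   # value of the customer database, in GBP (invented)
    exposure_factor = 0.4  # fraction of that value lost in one breach
    annual_rate = 0.2      # expected breaches per year (one every five years)

    sle = asset_value * exposure_factor
    ale = sle * annual_rate
    print("Single loss expectancy: GBP", sle)      # 200000.0
    print("Annualised loss expectancy: GBP", ale)  # 40000.0

On these numbers, a control costing less than GBP 40,000 a year to run is worth funding; one costing more may not be.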

5 unsolicited pings

When was the last time you did a network and asset discovery exercise on your network? If you handle credit card data, PCI DSS requires you to run a quarterly network vulnerability scan. Can you account for all the devices discovered on your network?
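
Even without a commercial scanner, a crude sweep will surface forgotten devices. A minimal sketch (assuming Linux-style ping flags and a hypothetical /24 office subnet; a real exercise would use a proper tool such as nmap):

    import subprocess

    SUBNET = "192.168.1."  # hypothetical range - substitute your own

    for host in range(1, 255):
        address = SUBNET + str(host)
        # one echo request ('-c 1'), one-second timeout ('-W 1');
        # on Windows the flags are '-n 1' and '-w 1000'
        result = subprocess.call(
            ["ping", "-c", "1", "-W", "1", address],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result == 0:
            print("Device answering at", address)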

4 complex words

If there’s one message we need to get across to end-users in eCommerce in 2015, it’s to avoid default, vendor-supplied passwords and weak passwords. As the ICO warned us this November, when a Russian website provided live footage from thousands of webcams across the UK, we should never leave vendor-supplied passwords on our devices. Make sure all the devices on your network are protected by strong passphrases, comprising not just single words but phrases containing numbers and special characters. An example would be: MerryChr1stmas2You! Remember also to use a different phrase for each device and logon account.
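
If you enforce this at account creation, so much the better. A minimal sketch of such a policy check in Python (the length threshold and the list of vendor defaults are illustrative assumptions, not a standard):

    import re

    VENDOR_DEFAULTS = {"admin", "password", "12345", "default"}  # illustrative

    def acceptable(passphrase):
        # Require length, mixed case, a digit, a special character,
        # and refuse a few obvious vendor defaults.
        return (len(passphrase) >= 14
                and re.search(r"[a-z]", passphrase) is not None
                and re.search(r"[A-Z]", passphrase) is not None
                and re.search(r"[0-9]", passphrase) is not None
                and re.search(r"[^A-Za-z0-9]", passphrase) is not None
                and passphrase.lower() not in VENDOR_DEFAULTS)

    print(acceptable("MerryChr1stmas2You!"))  # True
    print(acceptable("admin"))                # False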

3 varied pens

It’s good security practice to change your penetration tester regularly: the black art of ethical hacking is partly creative and intuitive, and what one pen tester finds, another may not. People have their favourite methods and utilities, and over time, if you keep re-using the same penetration testers, you may find you get the same results. Change them regularly so that no stone is left unturned in the search for vulnerabilities. In addition to the human element, mid-tier to large organisations should also invest in some automated threat analysis. This year we’ve been very impressed with the offering from AlienVault, with its large OTX database and threat analysis.

2 console dumps

One of the key findings in the 2014 Senate report on the 2013 Target breach, which affected up to 110 million customers, was Target’s failure to respond to multiple warnings from the company’s anti-intrusion software. Ensure your SIEM is properly configured, and that you have the right resources to monitor its logs in real time and proactively.
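
Real-time, rather than retrospective, is the operative word. The sketch below shows the principle in miniature (the log path and alert keywords are invented for the example; a real SIEM correlates across many sources at scale): watch a log as it grows and shout the moment a high-severity entry appears.

    import time

    LOGFILE = "/var/log/ids/alerts.log"  # hypothetical path

    with open(LOGFILE) as log:
        log.seek(0, 2)  # start at the end: we only care about new entries
        while True:
            line = log.readline()
            if not line:          # nothing new yet
                time.sleep(1.0)
                continue
            if "ALERT" in line or "malware" in line.lower():
                print("ACT NOW:", line.strip())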

And no view access to SNMP

SNMP (Simple Network Management Protocol) is used in systems management to monitor devices such as routers, switches, and printers; on your systems management consoles, it enables agents to report the status of IP devices. Unfortunately, a large number of SNMP v1/v2c configurations give any user an unrestricted view of the entire agent tree, which can yield sufficient information to obtain read-write or admin-level SNMP access.
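
A quick way to find the worst offenders is to ask each device whether it answers the default ‘public’ community string. A minimal sketch (assuming the net-snmp command-line tools are installed; the host list is hypothetical):

    import subprocess

    HOSTS = ["192.168.1.1", "192.168.1.20"]  # hypothetical devices

    for host in HOSTS:
        # v2c GET of sysDescr.0 with the default community,
        # two-second timeout ('-t 2') and no retries ('-r 0')
        result = subprocess.call(
            ["snmpget", "-v2c", "-c", "public", "-t", "2", "-r", "0",
             host, "1.3.6.1.2.1.1.1.0"],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if result == 0:
            print(host, "answers the default 'public' community string!")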

Wishing all our readers a Merry Christmas and Happy & Prosperous 2015.


What Price, Privacy?

October 15, 2014

At IP Expo in London last week, the inventor of the World Wide Web, Tim Berners-Lee, outlined his vision for the future of the web and of data ownership in his keynote speech.

[Image: © Victoria – Fotolia.com]

Over the past few years there have been numerous concerns about big data and its use. Tim Berners-Lee argues that whilst he is perfectly happy for his personal data to be used by professionals where it is of benefit to him or to mankind (e.g. by healthcare professionals after a road traffic accident), this does not extend to commercial organisations using it for targeted marketing. Tim argues that the owner of personal data is the data subject, not large corporations, and that we should have a say in how it can be used and whether it can be sold or merged with other data sets.

Others take a different view. An interesting panel discussion with Raj Samani, Andrew Rose and Josh Pennell ensued later in the conference. The panellists disagreed over whether the ‘reputational damage’ argument around data loss still means anything to organisations, since data breaches are seemingly ubiquitous and those outside the industry, especially the Facebook generation, simply don’t care if their personal data is lost. Time will tell whether Target, for example, suffers long-term damage from its breach, although early indications are that its share price has mostly recovered.

Earlier in the month, Raj Samani had argued in a Forbes webcast that, as a consumer, he gives consent for his personal data to be used and shared when he signs up for services, and that most consumers will happily do so, given consent, perceived value and transparency. After all, services such as Twitter and Facebook are free for the end-user. Raj does concede, however, that the terms and conditions are rarely clear, being long legal documents that few end-users will actually read. There is rarely effective transparency in these cases if end-users do not realise what they are signing up to.

How Tim’s proposal might work in practice would be to change legislation and afford personal data the same status as copyrighted material: use in a given context would require the owner’s specific consent, and re-use would require a royalty paid to the data subject. It might also solve the forthcoming conundrum of the “right to erasure” in the new EU data protection legislation: if I ask to be removed from one database, in theory the deletion should cascade into the third-party databases to which the data has been sold on. It would also take away some of the burden from over-worked and severely under-resourced regulators like the ICO.

I’m sure many organisations will say such a model is unworkable, but it may just make them think about how and where they use our personal data, especially if over-use directly affected the bottom line via royalty payments. Forty years ago, in the Cold War era, if you had suggested an interconnected network of disparate computers upon which banking, finance, government, education, retail and leisure would all rely, one both ubiquitous and intrinsic to our daily lives, the idea would probably have been dismissed as unworkable science fiction. Yet in 2014 we rely on the internet for all these things and more. Our industry needs equally radical thinking today for a new model of data protection.

Phil Stewart is Director, Excelgate Consulting


WikiLeaks: The Law Lags Behind the Technology

January 31, 2011

One of the challenges the US government faces is successfully trying someone under the Espionage Act of 1917. There have been few successful convictions in such cases, though not none, as is sometimes erroneously reported. In 1919, the US Supreme Court ruled that the Espionage Act did not contravene the First Amendment to the US Constitution (which prohibits, amongst other things, the making of any law infringing freedom of the press or freedom of speech).

"Top Secret"

The leaking of classified material into the public domain is clearly an offence under the Espionage Act (ways to prevent this, from a technology and process perspective, were covered in Excelgate’s previous article, WikiLeaks: Is There a Silver Bullet?). The law, however, was never designed for the Internet age, and loopholes remain concerning the re-publication of already leaked material. This has been recognised in the US Congress, with Senators proposing a new SHIELD Act specifically targeting those who publish already leaked material. Amongst the aims of the new Act is to “make it illegal to publish the names of U.S. military and intelligence informants.” The debate now rages in the US as to whether such an Act would curtail freedom of the press and thus be deemed unconstitutional.

In the UK, things are more clear-cut concerning the publication of already leaked classified material. The Official Secrets Act was revised in 1989, repealing the “public interest” defence of the former Act of 1911. In addition, section 5 makes provisions that potentially allow for the prosecution of newspapers or journalists who publish secret information leaked to them by a Crown servant or government contractor in contravention of any section of the Act. This would include (under section 3) the disclosure of information concerning international relations. It does, however, only apply to UK citizens, or where the disclosure takes place in the UK, the Channel Islands or the UK’s overseas territories.

The UK is not immune, however, from laws which lag behind the realities of 21st-century technology. Earlier this month, the UK Deputy Prime Minister, Nick Clegg, discussed changes to the UK’s libel laws, which currently work against scientists and others who publish their findings online. Cases have been brought against UK citizens by foreign companies whereby the publisher’s ISP is effectively told to remove the content, acting as an intermediate judge and jury. The current UK libel laws have attracted much attention, with more than 50,000 people backing the Libel Reform campaign. The US has also passed a bill protecting its citizens from these laws by decreeing that foreign libel judgments are not enforceable in the US.

Once again, the issue is that technology has evolved so quickly that the law has not kept up. When the UK’s libel laws were drafted, written publications tended to be well thought through, drafted and checked by publishers and their legal teams prior to publication. Publication was limited to the national media: print, then radio, then television. Very rarely did John Smith’s opinions appear in print, and when they did, they would have been checked by a team of editors.

In 1996, it was estimated that there were 30 million web pages, located on 275,000 servers, as indexed by the AltaVista search engine. As of January 2011, the World Wide Web is estimated to consist of 13.03 billion pages. Prior to the World Wide Web, it was straightforward to establish what constituted a reasonable case for defamation: a false statement which damaged a reputation. It was easy to prosecute too, where there was no supporting evidence to back the claim. Discussions between scientists were unlikely to surface into the public domain without prior checking by their publisher: what began as a private discussion between two scientists remained so. With the proliferation of online blogging, this approach to publishing no longer holds, and in some cases the current laws are abused. Emails, too, are in some ways a dangerous form of communication: they tend to be the 21st-century equivalent of a spoken discussion, sometimes flippant, reflecting what, before the Internet, would only ever have been said aloud. Yet emails can be forwarded, opening the way for libel action.

[Image: Digital World. In 1996 it was estimated there were 30 million web pages globally; this had grown to over 13 billion by 2011, nearly two pages for every person on earth. © Nmedia – Fotolia.com]

As this article is being written, it is reported that five men in the UK have been arrested in connection with alleged participation in Anonymous’ Denial of Service (DoS) attacks against PayPal, Amazon and MasterCard, mounted in support of WikiLeaks. In a DoS attack, a hacker or group of hackers aims to bring down a website or service by flooding it with traffic, so that the site is no longer able to respond. Such attacks are illegal under both US and UK law. In the UK, the Police and Justice Act 2006 makes it illegal to engage in a DoS attack, with a maximum sentence of up to 10 years’ imprisonment; part of the reason for that Act was a perceived loophole in the Computer Misuse Act 1990, which potentially did not cover DoS attacks. In the US, such attacks are covered under the Computer Fraud and Abuse Act.

The interconnectivity of the Internet and the WikiLeaks affair both illustrate that it is impossible for one jurisdiction, acting alone, to deal with what is a global problem. As with the financial crisis, co-ordinated action is needed across governments and geographies. Those who seek to engage in criminal damage to computer systems should feel the force of the law, irrespective of where the attack was launched from. Perhaps 2011 is the year for governments worldwide to think about how their current legislation reflects data publication and Internet usage and abuse. Policy, process and procedures are all very well, but the law also needs to reflect the realities of the 21st century. As with the birth of international electronic communication in the form of telegraphy, only when there is a global consensus will there be a way forward.

Phil Stewart is director of Excelgate Consulting and a member of the management team of the UK Chapter of the ISSA.