The 12 Days of Cyber Christmas

December 19, 2014

As 2014 draws to a close, we will not make any predictions for 2015. We'll leave that to astrologers and Time Lords (and we don't believe in either!). What we can do, however, is learn from the past. As an industry, we haven't been good at educating those outside it about the key things they should be doing, yet the same basic security measures, either poorly implemented or missing completely, pop up time and again. So here is our take on the Twelve Days of Christmas – the 12 Days of Cyber Christmas: 12 essential security measures that organisations, both large and small, should be taking.


On the 12th day of Christmas,
Cyber Security sent to me:
12 remotes removing
11 policies persuading
10 defaults denying
9 apps a patching
8 coders coding
7 keys safeguarding
6 risks a weighing
5 unsolicited pings
4 complex words
3 Varied Pens
2 console dumps
And No View access to SNMP

On the 12th day of Christmas, Cyber Security sent to me: 12 remotes removing

It's truly shocking how many accounts have remote access to resources they simply don't need. Think about this from both a technical and a "need to know" business-case perspective. On the technical side, do you really need Remote Desktop (in Windows) via RDP? If you conclude you really do, make sure to remove the local administrator account from RDP and set up specific user accounts with strong passphrases (see the 4th Day of Christmas). In the United States Senate report on the 2013 Target breach, published earlier this year, one of the key findings was that Target had given remote access to a third-party vendor which did not appear to be following information security practices.
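
If you do retain RDP, audit who actually holds it. Here is a minimal sketch (Windows only, assuming the built-in "Remote Desktop Users" group and the usual output format of the net localgroup command) that lists the group's members and flags accounts that shouldn't be there:

```python
# Minimal sketch: list members of the "Remote Desktop Users" local group on
# Windows and flag accounts that probably shouldn't have RDP access.
# Group and flagged account names are illustrative assumptions.
import subprocess

def remote_desktop_members(group="Remote Desktop Users"):
    """Return the member names reported by `net localgroup <group>`."""
    out = subprocess.run(
        ["net", "localgroup", group], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    # Members are listed between the dashed separator and the closing status line.
    try:
        start = next(i for i, line in enumerate(out) if set(line.strip()) == {"-"}) + 1
    except StopIteration:
        return []
    return [l.strip() for l in out[start:] if l.strip() and not l.startswith("The command")]

if __name__ == "__main__":
    flagged = {"administrator", "admin", "guest"}   # accounts that should not have RDP
    for member in remote_desktop_members():
        marker = "  <-- review" if member.lower() in flagged else ""
        print(f"{member}{marker}")
```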

11 policies persuading

As we reported in September, organisations large and small need to put in place security policies, user training and quarterly security KPIs, monitored by their HR teams. If you are looking for somewhere to start, physical security and social media usage are two good candidates for early 2015. It's simply too easy to get into some organisations on the physical side, whilst we've seen numerous politicians and would-be politicians forced to resign following faux pas on social media in recent months.

10 defaults denying

The constant revelations from Edward Snowden in 2014 remind us all of the dangers of giving too much access to data to one individual in the organisation. According to the Verizon 2014 Data Breach Investigations Report, 88% of security incidents were caused by insider misuse, either accidental or malicious. Review your access permissions to data too, and ensure you have appropriate technical controls in place both to grant access to data only to appropriate personnel, and to audit and track access to that data. I've really been impressed with Vormetric's Data Security Platform, which specifically addresses this issue using file-level encryption and key management.
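
As a starting point for that review, even a simple script can surface files that are exposed more widely than they should be. A minimal POSIX sketch (the /srv/data path is an illustrative assumption) that walks a data directory and flags anything readable or writable by "other":

```python
# Minimal sketch: flag files in a data directory that are readable or writable
# by "other". The root path is an assumption for illustration.
import os
import stat

def overly_permissive(root="/srv/data"):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & (stat.S_IROTH | stat.S_IWOTH):
                yield path, stat.filemode(mode)

if __name__ == "__main__":
    for path, perms in overly_permissive():
        print(f"{perms}  {path}")
```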

9 apps a patching

According to Secunia's Vulnerability Review 2014, "the more widespread a program is, and the higher the unpatched share, the more lucrative it is for a hacker to target this program, as it will allow the hacker to compromise a lot of victims". This is reflected in their research, where Microsoft IE (99% market share) had 270 known vulnerabilities at the time of publication, whereas Apple Safari (11% market share) had 75. They also found that 33% of users running IE were on unpatched browsers, and that 14% of Safari installations were unpatched. Ensure you are patching all your applications – in-house and third-party – regularly in 2015!
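
Patching starts with knowing what you have installed and what version it is at. A minimal sketch of that idea, comparing an inventory of installed application versions against the minimum versions you consider patched (the application names and version numbers here are invented for illustration):

```python
# Minimal sketch: compare an application inventory against minimum patched
# versions. The inventory and thresholds are illustrative assumptions.
MINIMUM_PATCHED = {"browser": (11, 0, 9), "pdf_reader": (10, 1, 4)}

def parse(version):
    return tuple(int(part) for part in version.split("."))

def unpatched(inventory):
    for app, version in inventory.items():
        required = MINIMUM_PATCHED.get(app)
        if required and parse(version) < required:
            yield app, version, ".".join(map(str, required))

if __name__ == "__main__":
    installed = {"browser": "11.0.2", "pdf_reader": "10.1.4"}
    for app, have, need in unpatched(installed):
        print(f"{app}: installed {have}, needs at least {need}")
```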

8 Coders coding

If you are developing any applications in-house, be sure to encourage secure coding best practices. A wealth of information is available on this subject; the Microsoft Security Development Lifecycle is a good starting point.
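
To make that concrete, one of the most basic secure-coding habits is to use parameterised queries rather than building SQL from user input. A minimal sketch using the Python standard library's sqlite3 module, purely for illustration:

```python
# Minimal sketch: parameterised queries instead of string concatenation,
# which invites SQL injection. Table and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

def find_user(name):
    # Unsafe: f"SELECT * FROM users WHERE name = '{name}'"
    # Safe: let the driver bind the value as a literal.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))
print(find_user("alice' OR '1'='1"))   # returns nothing instead of every row
```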

7 Keys Safeguarded

In the eBay breach announced on 21st May 2014, hackers stole the personal information of 145 million active eBay users after compromising the database containing encrypted passwords. This suggests that there wasn't proper key management in place to safeguard the data. Remember never to give users access to keys they don't require. Encryption of data, both in-house and in the cloud, along with key management, is handled very efficiently by Vormetric's Data Security Platform.
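
The underlying principle is simple: encrypted data is only as safe as the separation between the ciphertext and the key. A minimal sketch using the third-party cryptography package (pip install cryptography); in practice the key would live in an HSM or a key management service, never alongside the data it protects:

```python
# Minimal sketch: keep the key separate from the data it protects.
# Uses the third-party `cryptography` package for illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this in your key manager, not on the data server
cipher = Fernet(key)

token = cipher.encrypt(b"cardholder or customer record")
print(token)                  # safe to store in the database
print(cipher.decrypt(token))  # only possible with access to the key
```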

6 Risks a weighing

Any organisation, from SME to large enterprise, should be reviewing its risks regularly. To determine where to spend your resources on mitigation, look at the value of the assets you are protecting, the cost and likelihood of a breach, and the legislation you must comply with, and then decide where your resources are best spent. A good starting point for a risk management framework is ISO 31000.
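
One common way of weighing those costs is annualised loss expectancy (ALE = single loss expectancy × annual rate of occurrence), which lets you rank risks on a comparable scale. A minimal sketch with invented figures, for illustration only:

```python
# Minimal sketch: rank risks by annualised loss expectancy (ALE = SLE * ARO).
# All figures below are invented for illustration.
risks = [
    # (name, single loss expectancy in GBP, expected occurrences per year)
    ("Laptop theft, unencrypted disk", 80_000, 0.5),
    ("Web app breach of customer data", 500_000, 0.1),
    ("Ransomware on file server", 120_000, 0.25),
]

for name, sle, aro in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: ALE = £{sle * aro:,.0f} per year")
```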

5 Unsolicited Pings

When was the last time you did a network/asset discovery exercise on your network? If you deal with credit card data, you have a requirement to do a quarterly network vulnerability scan. Can you account for all the devices discovered on your network?
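
Even without a full vulnerability scanner, a lightweight sweep will tell you what is answering on your network. A minimal sketch that attempts a TCP connect to a handful of common ports across a subnet; the subnet and port list are assumptions, and this is no substitute for a proper quarterly scan:

```python
# Minimal sketch: TCP-connect sweep of common ports across a subnet.
# Subnet and port list are illustrative assumptions; run only on networks you own.
import socket
from ipaddress import ip_network

PORTS = (22, 80, 443, 445, 3389)

def sweep(subnet="192.168.1.0/24", timeout=0.3):
    for host in ip_network(subnet).hosts():
        open_ports = []
        for port in PORTS:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:
                    open_ports.append(port)
        if open_ports:
            print(f"{host}: responds on {open_ports}")

if __name__ == "__main__":
    sweep()
```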

4 Complex Words

If there's one message we need to get over to end-users in eCommerce in 2015, it's to avoid default, vendor-supplied passwords and weak passwords. As the ICO warned us this November, when a Russian website provided live footage from thousands of webcams across the UK, we should never leave vendor-supplied passwords on our devices. Make sure all the devices on your network have strong passphrases, comprising not just words but phrases containing numbers and special characters. An example would be: MerryChr1stmas2You!   Remember also to use a different phrase for each device and logon account.
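
A few simple checks catch most of the weak and default choices. A minimal sketch along the lines suggested above: length, mixed character classes, and not a known vendor default (the default list and minimum length are illustrative assumptions):

```python
# Minimal sketch: basic passphrase checks. The default list and minimum
# length are illustrative assumptions.
import string

VENDOR_DEFAULTS = {"admin", "password", "123456", "letmein"}

def is_strong(passphrase, min_length=14):
    return (
        len(passphrase) >= min_length
        and passphrase.lower() not in VENDOR_DEFAULTS
        and any(c.isdigit() for c in passphrase)
        and any(c in string.punctuation for c in passphrase)
        and any(c.isupper() for c in passphrase)
        and any(c.islower() for c in passphrase)
    )

print(is_strong("MerryChr1stmas2You!"))   # True
print(is_strong("admin"))                 # False
```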

3 Varied Pens

It's good security practice to change your penetration tester regularly: the black art of ethical hacking is partly creative and intuitive, and what one pen tester finds, another may not. People have their favourite methods and utilities, and over time, if you keep re-using the same penetration testers, you may find you get the same results. Change your penetration testers regularly so that no stone is left unturned in the search for vulnerabilities. In addition to the human element, mid-tier to large organisations should also invest in some automated threat analysis. This year we've been very impressed with the offering from AlienVault, with their large OTX database and threat analysis.

2 Console Dumps

One of the key findings in the 2014 Senate report on the 2013 Target breach, which affected some 110m customers, was Target's failure to respond to multiple warnings from the company's anti-intrusion software. Ensure your SIEM is properly configured, and that you have the right resources to monitor these logs in real time and on a proactive basis.
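
To illustrate the kind of rule a properly configured SIEM should be running for you, here is a minimal sketch that counts failed logons per source over a sliding window and raises an alert on a threshold; the event format, window and threshold are assumptions for illustration:

```python
# Minimal sketch: alert when a source accumulates too many failed logons
# within a sliding window. Event format and thresholds are illustrative.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5

def alerts(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples."""
    failures = defaultdict(deque)
    for ts, source, outcome in events:
        if outcome != "FAILED":
            continue
        q = failures[source]
        q.append(ts)
        while q and ts - q[0] > WINDOW:   # drop attempts outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            yield ts, source, len(q)

sample = [(datetime(2014, 12, 19, 9, 0) + timedelta(minutes=i), "10.0.0.7", "FAILED")
          for i in range(6)]
for ts, source, count in alerts(sample):
    print(f"{ts}: {count} failed logons from {source} within {WINDOW}")
```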

And no View Access to SNMP

SNMP (Simple Network Management Protocol) is used in systems management to monitor devices such as routers, switches, and printers. On your systems management consoles, it enables agents to report the status of IP devices. Unfortunately, a large number of SNMP v1/v2 configurations allow any user an unrestricted view of the entire agent tree, which can yield sufficient information to obtain read-write or admin-level SNMP access.
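
A quick check is whether your devices still answer the default read community string at all. A minimal sketch using the third-party pysnmp package (its classic synchronous hlapi interface); the host address and OID are assumptions for illustration:

```python
# Minimal sketch: does an agent still answer an SNMP v2c GET with the default
# "public" community string? Host and OID are illustrative assumptions.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def answers_default_community(host, community="public"):
    errorIndication, errorStatus, _index, _varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),              # SNMP v2c
        UdpTransportTarget((host, 161), timeout=1, retries=0),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
    ))
    return errorIndication is None and not errorStatus

if __name__ == "__main__":
    host = "192.168.1.1"   # illustrative address
    if answers_default_community(host):
        print(f"{host} still answers the default community string; lock it down")
```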

Wishing all our readers a Merry Christmas and Happy & Prosperous 2015.


What Price, Privacy?

October 15, 2014

At IP Expo last week in London, the inventor of the World Wide Web, Tim Berners-Lee, outlined his vision for the future of the web and data ownership in his keynote speech.


© Victoria – Fotolia.com

Over the past few years there have been numerous concerns about big data and its use. Tim Berners-Lee argues that whilst he is perfectly happy for his personal data to be used by professionals where it is of benefit to him or mankind (e.g. healthcare professionals in a road traffic accident), this does not apply to commercial organisations for the purposes of targeted marketing. Tim argues that the owner of personal data is the data subject, not large corporations, and that we should have a say in how it can be used and where it can be sold or merged with other data sets.

Others take a different view. An interesting panel discussion with Raj Samani, Andrew Rose and Josh Pennell ensued later in the conference. There was disagreement amongst the panellists as to whether the 'reputational damage' argument regarding data loss actually means anything to organisations these days, since data breaches are seemingly ubiquitous and those outside the industry, especially the Facebook generation, simply don't care if their personal data is lost. Time will tell whether there is long-term damage to Target, for example, from its breach, although early indications suggest that Target's share price has mostly recovered.

Earlier in the month, Raj Samani had argued in a Forbes webcast that, as a consumer, he gives consent for his personal data to be used and shared when he signs up for services, and that most consumers will happily do so, given consent, perceived value and transparency. After all, services such as Twitter and Facebook are free for the end-user. Raj does concede, however, that the terms and conditions are rarely clear, being lengthy legal documents, and that few end users will actually read them. There is rarely effective transparency in these cases if end-users do not realise what they are signing up to.

How Tim's proposal might work in practice would be to change legislation and afford personal data the same status as copyrighted material: use in a given context requires the owner's specific consent, and re-use requires a royalty paid to the data subject. It may also solve the forthcoming conundrum about the "right to erasure" in the new EU Data Protection legislation: if I ask to be removed from one database, in theory the deletion should cascade into third-party databases where the data has been sold on. It would also take away some of the burden from over-worked regulators like the ICO, who are severely under-resourced.

I'm sure many organisations will say such a model is unworkable, but it may just make organisations think about how and where they use our personal data, especially if over-use of personal data directly affected the bottom line via royalty payments. Forty years ago, in the Cold War era, if you had suggested an interconnected network of disparate computers upon which banking, finance, government, education, retail and leisure all relied, which would become both ubiquitous and intrinsic to our daily lives, the idea would probably have been dismissed as unworkable science fiction. Yet in 2014, we rely upon the internet to sustain all these things and more. Our industry needs such radical thinking today for a new model of data protection.

Phil Stewart is Director, Excelgate Consulting


Compliant But Not Necessarily Secure? How HR Can Help…

September 25, 2014

As the recent Target breach has shown, being compliant does not necessarily mean being secure. As a recent analysis by the United States Senate of the theft of personal and financial information of 110 million Target customers illustrates, key warning signals that Target's network was being infiltrated were missed within the organisation. Both anti-malware and SIEM systems identified suspicious behaviour on the network, yet two days later the attackers began exporting customer data.


© Rawpixel – Fotolia.com

The fact that Target was certified as compliant with the PCI DSS standard just two months before the breach took place has certainly caused a lot of debate in the industry. I've always argued that simply pursuing an annual 'checkbox exercise' is not enough: there must be a lasting and ongoing cultural awareness with regards to data security.

I've also long argued that whilst responsibility and drive for data security lie at board level, the entire business needs to be both aware and on board too. That's why HR is vital to achieving this. KPIs for data security, along with employee training programmes, need to reinforce the goals of the CISO. Data security KPIs should be SMART: Specific, Measurable, Achievable, Realistic and Time-orientated. Across all business units there should be at least annual (and perhaps quarterly, depending upon the nature of the business) employee awareness training programmes.

The KPIs should be pertinent to the business unit they apply to and relevant to the data security standard your organisation is working to gain (or maintain) compliance with. For PCI DSS, for example, the requirement to 'Protect Cardholder Data' clearly has different implications for different business units. For a call centre, the requirement would be around not storing PANs in a readable format; not recording CV2 numbers when taken over the phone; not writing cardholder details down on bits of paper for subsequent entry; and the secure physical storage of paper records. For development, it would pertain to the encryption methods used and how keys are stored and protected; how PANs are rendered unreadable electronically, in adherence to the standard; and how cardholder data is transmitted across networks. For IT support, KPIs could relate to maintaining up-to-date anti-malware and intrusion prevention systems; hourly monitoring of SIEM information; and weekly reports to senior management on monitoring status and patch levels.
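
On the development side, rendering a PAN unreadable for display and logging is one of the simpler requirements to demonstrate. A minimal, purely illustrative sketch that keeps only the last four digits; it is not a substitute for proper encryption or tokenisation of stored cardholder data:

```python
# Minimal sketch: mask a PAN for display/logging, keeping only the last four
# digits. Illustrative only; stored cardholder data still needs encryption
# or tokenisation to meet the standard.
import re

def mask_pan(pan: str) -> str:
    digits = re.sub(r"\D", "", pan)           # strip spaces and separators
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1234"))        # ************1234
```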

Whilst line managers are usually responsible for setting KPIs in a business, I strongly believe HR can make a valuable contribution: by enforcing company policy to ensure all line managers include quarterly data security KPIs in their target setting. A function of HR is to safeguard the reputation of the business, something which is also a function of the information security professional.

A recent study by insurance firm Beazley analysed more than 1,500 data breaches it serviced between 2013 and 2014, and discovered that employee errors accounted for 55% of them. HR is also about people management and defining the culture of an organisation. Whilst I'm not suggesting that HR take on responsibility for information security, they certainly have a part to play in ensuring that the correct mindset operates across all parts of the business. Changing an organisation's culture requires a sustained effort from numerous players across the business: and HR is one of the key ones.

Phil Stewart is director of Excelgate Consulting.


Bring Your Own Pandora’s Box

May 1, 2012

Bring Your Own Device (BYOD) seems to be spreading as a debating point in the industry like a virulent virus. Is it fast becoming the 21st-century version of "Pandora's Box" for the workplace?


In ancient Greek mythology, Pandora's box contained all the evils of the world. Compelled by her own curiosity, Pandora opened it, and all the evil contained therein escaped and spread across the world. Is "Bring Your Own Device" (BYOD) the 21st-century equivalent?
Image: © foaloce - Fotolia.com

I’ve long been an advocate of information security professionals adopting a more flexible approach when giving end-users the functionality and flexibility they want: a happier user is a more productive user. Indeed, a recent study published by O2 last month [1], the largest ever flexible working pilot of its kind in the UK, found that workers were more productive, happier and richer if they did not go into the office.

Users want to experience the functionality, applications and interfaces that they use and are familiar with in their private lives. They want devices that allow them to work remotely from outside the office and on the move. All things can be taken too far, however, and this is exactly what is happening with BYOD. It seems that some organisations are intent on throwing out the corporate security baby with the data bathwater.

BYOD should be approached with the same three basic principles as any other information security issue: confidentiality, integrity and availability. Vast sums were spent on secure laptop rollouts in the noughties in order to give workers the ability to work from home and on the road securely. Did we ask or allow users to bring their own home laptops (along with their malware!) and just plug them into the corporate network and hope for the best? Of course we didn't!

Some good reasons why BYOD may not be such a great idea:

  • What happens if the device is lost or stolen from the end user? Along with the device goes (unencrypted) corporate data and contacts, often with no ability to remote wipe or kill it.
  • Removable media is easily targeted by would-be remote hackers as a way of bypassing network firewalls: it is the assumed route, for example, for the Stuxnet virus.
  • Smartphones nowadays usually have both voice recording and high-definition camera capabilities, easily allowing for industrial espionage. There's a good reason most high-security organisations don't allow them on their premises.
  • There are legal ramifications in most geographies in monitoring employees' personal web behaviour.
  • Laws on monitoring employees' personal data and behaviour vary from geography to geography: this is a potential headache for corporate risk departments.
  • How will you monitor the mobile devices for malware, and prevent its introduction into the corporate network?
  • How will you identify which device is ‘rogue’ and which is ‘approved’ in a true BYOD environment?
  • How do you monitor for acceptable use on an employee-owned device? (assuming there are no legal blockers)
  • How will you prevent the employee walking out the door with your customer and supplier contact lists?
  • How do you prevent illicit or unlicensed content from entering the corporate network?
  • Which devices are deemed to be supported and unsupported?
  • Does your IT department have the right skills to support cross-platform mobile applications in a secure fashion?

The more enlightened organisations are of course striking a sensible compromise: rolling out well-known tablets to their workforce in a controlled manner, whilst not allowing private devices to access the corporate network. According to Apple, nine months after the iPad launched, 80% of Fortune 100 companies had either deployed or were piloting the device [2]. When deploying mobile devices you should consider all the areas you would have done for a laptop, and more:

  • How will the data on the device be encrypted?
  • How will you transmit data encrypted to your internal network and external service providers?
  • How will you ‘remote kill’ the device should it be lost or stolen?
  • How will you prevent malware and inappropriate content entering the network?
  • How will the user authenticate securely onto both the device itself and also the corporate network?
  • How will you identify vulnerabilities on the mobile Operating System?

It is perhaps my last question which causes me the most concern, since some in the industry appear to think that certain mobile platforms are not open to vulnerabilities and exploits in the same way as their desktop and server counterparts. A recent mystery shopper exercise has confirmed my opinion: most do not take the threat posed by mobile devices as seriously as they should. The SANS Institute think otherwise: a whitepaper published by them in 2011 [3] stated that whilst Apple has done a great job by allowing only digitally signed software to be installed on a non-jailbroken iPhone, there were still vulnerabilities that could be exploited, particularly in the web browser (Safari). Earlier this year Reuters reported on a flaw in the Android operating system which allowed hackers to eavesdrop on phone calls or monitor the location of the device [4].

In order to reduce the risk posed by BYOD, organisations should develop an acceptable usage policy for tablets which clearly identifies the steps end-users should take to complement the security that has been built into the device by the IT department. This should include sensible advice on the dangers of allowing shoulder-surfing when logging on in a public area; ensuring the device is not left in a public place; and connecting regularly to the corporate network to allow updates.

The sensible compromise of rolling out familiar, but secured, tablets combined with an acceptable usage policy will prevent your employees from seeing management as "tone deaf" and may well realise significant increases in both end-user productivity and creativity, whilst maintaining a sensible approach to safeguarding against the unguarded use of mobile devices in the workplace.

Phil Stewart is Director, Excelgate Consulting & Secretary & Director, Communications, ISSA UK

Sources:

[1] Working From Home More Productive, Telegraph 3rd April 2012:
http://www.telegraph.co.uk/technology/news/9182464/Working-from-home-more-productive.html

[2] Morgan Stanley: Tablet Demand & Disruption:
http://www.morganstanley.com/views/perspectives/tablets_demand.pdf

[3] SANS Institute: Security Implications of iOS:
http://www.sans.org/reading_room/whitepapers/pda/security-implications-ios_33724

[4] Android Bug Opens Devices to Outside Control, Reuters, 24th February 2012
http://www.reuters.com/article/2012/02/24/us-google-android-security-idUSTRE81N1T120120224


2011: The Year that News Made the News

December 15, 2011

2011: A year that started with the continued aftermath of the WikiLeaks saga, and ended with the Leveson Inquiry investigating the phone hacking scandal in the UK. Along the way many big names saw data breaches in 2011 including Citibank, Epsilon Marketing, RSA, Google, and Sony.

News in the News

2011 was the year that the news made the news. We started the year talking about classified information finding its way into the press and ended the year with an inquiry into the press. Alleged offences under section 55 of the Data Protection Act are at the heart of the phone hacking scandal.
Image: © Claudia Paulussen - Fotolia.com

Back in January Excelgate reported that, following the ongoing WikiLeaks saga, a coordinated global response across governments was necessary if a repeat of such a massive leak of classified material was to be avoided. The industry is just coming to terms with the ramifications of WikiLeaks, and at the East West Institute in London in June it was recognised that there needs to be greater co-ordination in drafting new data protection and privacy legislation in the future, not just between technical and legal practitioners and politicians, but also across geographies.

2011 saw the launch of some new standards and assurance schemes in the UK. In March, ISSA UK launched ISSA 5173 – a new security standard aimed at improving information security for small and medium-sized enterprises. I've been involved in the formation of the standard since the foundation of the workgroup back in 2010, and in the collation of feedback since the publication of the draft standard in March. To date feedback has been overwhelmingly positive, and the standard has been well received by the IT press. 2012 will see the publication of a series of guidance documents to accompany the standard. 2011 also saw the launch of CESG's Commercial Product Assurance, the replacement for the CCTM product assurance scheme for security solutions in the UK public sector. It aims to provide two levels of assurance for security solutions, depending on where the proposed solution is to be used. This should open up the relatively closed space of IL3 (for example, local authorities) by encouraging competition and innovation.

August saw rioting across London and other parts of the UK. Social media was widely reported to have been used by rioters to coordinate some of the riots. As reported in June, the misuse of social media comes in a number of forms: from criminal acts to damaging reputations, both corporate and private. In November, speaking at the London Conference on Cyberspace, the Foreign Secretary, William Hague, whilst warning against the dangers of government censorship with regards to the use of social media, did state that global, coordinated action is needed to deal with social media misuse and cyber attacks. In short, he said that "behaviour that is unacceptable offline is also unacceptable online, whether it is carried out by individuals or by governments."

The Leveson Inquiry, investigating the culture, practice and ethics of the press, started its formal evidence hearings in November. At the heart of the phone hacking scandal is that somewhere along the chain personal data was potentially procured illegally. The What Price Privacy? report of 2006, published by the Information Commissioner's Office, had already highlighted widespread abuse of the Data Protection Act in the UK and recommended a custodial sentence for Section 55 offences. As reported in my interview with Christopher Graham in August, the Information Commissioner will continue to push for custodial sentences for the most serious offences.

The message of the What Price Privacy? report has somehow become lost, with the focus falling on journalists rather than on the illegal trade in personal data. I certainly don't want to see restrictions placed upon a free press: this is the foundation of any democracy. Wrongdoing and fraud should be exposed. There is already a large carve-out of exceptions in the Data Protection Act for journalistic purposes. The trade and procurement of personal data is already illegal, however, and in my view the law is not being enforced severely enough – either in the severity of sentences for the most serious cases, or in the number of prosecutions.

I have been impressed with the ICO's response to date with regards to education and the issuing of fines. They have taken a very reasonable position: that people need education as to how to comply with the Data Protection Act. It would indeed be foolish to adopt a mass fining policy when the ICO's power to issue a monetary penalty notice only came into effect in April 2010. On the other hand, April 2012 will mark the second anniversary of that power, and to continue indefinitely in this mode would, in my opinion, be a mistake. I certainly don't want to see a situation where every single human error results in a fine from the ICO. The Data Protection Act is now, however, a 13-year-old piece of legislation, and organisations have had two years to ensure they comply with the law in this regard. If HMRC operated on a mainly notice-to-improve basis, I'm fairly sure we'd see a significant decline in tax revenues (i.e. non-compliance). They don't, however; they operate a sliding scale of penalties depending upon the severity of the mistake or non-compliance. For mistakes on VAT returns, for example, penalties are categorised as follows:

  • Careless: you failed to take reasonable care
  • Deliberate: you knowingly sent HMRC an incorrect document
  • Deliberate and concealed: you knowingly sent HMRC an incorrect document and tried to conceal the inaccuracy

In each category there is also a sub-category: prompted and unprompted (i.e. HMRC discovers the discrepancy or you notify HMRC of it). The fines vary from 100% of the tax due for deliberate, concealed and prompted, down to 0% for careless and unprompted. I'd like to see a similar sliding scale defined for data protection offences and then enforced, since I simply don't believe there have been only 7 serious data protection offences in the UK since April 2010. I've come across more than that myself this year alone, ranging from the potentially criminal to the careless error.

I've also, over the course of 2011, become convinced of the need for a mandatory data breach notification law in the UK, as is already the case in some US states. The naysayers are already out in full force, saying the UK doesn't need another piece of legislation regulating business. It is worth bearing in mind that this legislation originated in California – that US state well known for over-regulating businesses and stifling innovation – not! Similarly, the criticism of data breach notification laws is not based upon any real-world experience. A study from the University of California, Berkeley, of the views of CISOs in the US showed that data breach notification laws have put data protection and information security firmly into the public eye, and in some cases actually fostered dialogue between the consumer and the data controller regarding their data. They also empower consumers to protect themselves, either by asking awkward questions of their data controllers or by simply shopping elsewhere. We need this raising of awareness and dialogue in the UK too. Why should we either trust or trade with an organisation that doesn't safeguard our privacy?

Phil Stewart is Director, Excelgate Consulting & Secretary & Director, Communications for ISSA-UK


The Apple of my i

December 7, 2011

As 2011 draws to a close, inspiration for the way forward in the information security industry is drawn from the past and a man outside of it: Apple’s former CEO – Steve Jobs, who passed away on 5th October, 2011.

Apple: Think different

Apple's iconic logo is as well known globally today as their products. Many urban myths have sprung up over the years about the origins of the Apple logo: from the apple of knowledge from the Garden of Eden; to Steve Jobs having worked in an apple orchard; to the death of the founder of computer science, Alan Turing, from biting an apple laced with cyanide. An interview with the logo's designer, Rob Janoff, in 2009 revealed that none of these was the inspiration for the bitten apple – it was purely a design decision to give the logo scale and make the apple instantly recognisable as such, compared with other fruit such as a cherry. This version of the logo was used by Apple from the mid 1970s until 1998; the "Think Different" slogan was retired in 2002.

It was interesting to see the huge number of tributes for Steve Jobs when he passed away in October this year. Has there ever been such a response to the death of a technologist? Steve Jobs' death produced tributes from people from every walk of life and all corners of the globe: from Presidents to Prime Ministers; from rock stars to everyday users of Apple's products. Steve Jobs was a special person, who instinctively knew what his customers wanted without asking them. He saw the value of technology as an enabler in improving people's lives, but he absolutely understood that most people are not fascinated by the technology in itself but by what it can do for them in their everyday lives (in the same way the motor car was an enabler in the last century: few people are concerned with how cars work). Apple's products all have an underlying intuitiveness about them that really does allow their users to unpack them and use them straight away, without assuming any prior technical knowledge.

Most of us, however, do not possess the instinctive insight that Steve Jobs had – we have to do our market research with our target audience first. Historically, many information security professionals would not engage with end users at all, since these are the people who, they believe, will ask for things they can't have and do all the things we don't want them to do.

Some years ago I was brought into an American investment bank as project manager on a desktop platform refresh programme that had previously failed miserably in its objectives. Successive project managers had come and gone on that project, and it had always seemed a case of one step forward and two back. Typically for the IT department, the team were kept out of sight and out of mind in the lower dungeons of the bank: safely away from daylight and customers. There wasn't even a telephone for the team – all communication had previously been done via email!

I decided that as well as talking to the head of each business unit to determine what their requirements were, I would also – shock, horror! – talk to the end users of each team to determine and capture which applications they used and how they used them in their day-to-day tasks. I also initiated a series of end-user tests, and ensured that a representative from each team came down and tested the applications before the desktop builds were approved and shipped back to the respective end-user desks. When business managers asked why we would be asking for 30 minutes of one staff member's time for testing, I explained to them that this was improving staff productivity, by reducing helpdesk calls and eliminating the need to recall failed builds. This strategy paid off: not only did we retain this business, we went on to win further rollouts for other parts of the bank.

This year I've heard and seen things which beggar belief. A consultant proudly boasting that the organisation he was contracted to work for "deserved data breaches" because "their staff were uneducated" (worse still, he meant this in a generic sense, not specifically in relation to information security education). I've also heard all security vendors being branded as "snake oil vendors". An interesting concept – I don't think we'd have much of an industry without security vendors, and I've come across one or two unscrupulous practitioners in my time with scant regard for data privacy themselves, to whom that disingenuous adjective could easily apply.

There certainly are some security vendors to whom the adjective "snake oil" can easily be applied (and a reputable reseller recently reminded me of one): those that have little respect for their customers' product feedback; who are in the business purely to make money without advancing genuine information security; and whose products are so desperately clunky that they require reams of documentation, greatly reduce user productivity and encourage end users to find a workaround, thus bypassing security policy. Equally, however, there are some innovative vendors on the market that are genuinely interested in advancing information security: by helping to develop new standards; by helping the SME community through taking the laborious task of log oversight away from them and outsourcing it to specialists; or by helping to secure the use of the cloud. I've come across all these types of vendors too this year. To label all security vendors in the same fashion is not only disingenuous but also rather childish.

Earlier this year when I interviewed the UK's Information Commissioner, Christopher Graham, for the ISSA, he remarked how he felt that end users were just not getting the message regarding data protection. Too often we see the same old problems: users not being educated, making basic mistakes. Personally, I think we have an industry that's geared up for messaging aimed mainly at board and manager level and around legal compliance, so is it any wonder? Who is teaching end users how to handle personal data correctly, and what should and shouldn't be stored regarding credit cards, day to day, in their jobs? Similarly, I've always disliked the term "evangelist" – widely used in our industry – since it implies preaching! Who on earth likes being preached to? Perhaps that's why few end users are listening.

We urgently need an approach where information security professionals think about being business enablers whilst enhancing security, and can talk in a language that their end users understand. For twenty-plus years now, we've been thinking that all our problems will be solved if only we throw more technology at them. Yet still we see data breaches. Similarly, we need security products that are focused on improving end-user productivity, rather than working against the business. Then users might stop looking for workarounds to both the solutions and, hence, their security policy.

If only Apple did iSec!

Phil Stewart is Director, Excelgate Consulting  and Secretary and Director, Communications for ISSA UK.


The Insider Threat

September 5, 2011

Last month the Information Commissioner, Christopher Graham, gave an interview to the ISSA ahead of his address to the ISSA later this week, looking at how most data breaches start with an employee within the organisation:

In 2006, the ICO uncovered the organised and illegal trade of confidential personal information in the report, What Price Privacy? How widespread do you believe this problem is today in the UK?

“I’ve described it as a modern scourge. The headlines, both in 2006 and more recently, have all been about the behaviour of the press, but I think it goes much further … Basically we’ve got pretty systematic trashing of our rights under the Data Protection Act.  My predecessor, in flagging the blagging, made the case for a much stronger penalty that would act as a deterrent but also send a very strong signal that data protection offences are not a victimless crime. It’s very important now that parliament gets on and activates section 77 of the Criminal Justice & Immigration Act 2008 which allows for the custodial penalty of up to 6 months in a magistrates court and up to 2 years in the crown court, but has not been commenced. It wasn’t commenced because of a stand-off between the politicians and the press.  I think we can now get through that, because the terms of the debate have changed a bit.  This isn’t something that should wait for the Leveson Inquiry because frankly it isn’t about newspapers – it’s about debt recovery, claims management companies, matrimonial disputes, child custody battles, you name it.”

Yes, I remember when you became Information Commissioner in 2009, at a Parliamentary select Committee you re-iterated the need for a custodial sentence for convictions under section 55 of the Data Protection Act.

“Indeed, I did.  I didn’t get very far, though, because basically the politicians and the press had agreed this was going to be a sort of Sword of Damocles hanging over the press and if they misbehaved then it would be activated. I think the whole point was the 2006 report – yes, it talks about the behaviour of the tabloid press because the particular private investigator that the ICO raided, that was his main line of business.  But that’s  not what all the report was about. The idea that you don’t have to take any action against staff in NHS walk-in centres selling information to claims management companies because of some arcane dispute about investigative journalism is clearly nonsense. I didn’t get very far two years ago but I’m determined to go on pushing. There’s the human factor in all of this. We can have wonderful systems and policies for data protection and data security, but if the men and women on the ground don’t take it seriously and don’t think it matters – none of those systems are going to work. A small fine in a magistrates court is simply not a deterrent. I think understanding that you might go to prison is more like it, but it also enables the courts to look at the whole range of possible penalties which might be somewhere between a small fine and the threat of going to prison or having a community sentence.”

I noticed that in the ICO’s latest annual report – you mentioned the NHS – as an organisation they had the largest number of data breaches

 “They are about the largest organisation so I’m not surprised by that – they are quite good at reporting breaches, it’s part of their procedure. I recently met with the chief executive of the National Health Service,  Sir David Nicholson.  We had a very good, workmanlike discussion. There’s a lot of change in the NHS and that makes for a particularly dangerous time but you’ve got to distinguish between the trucks and the tracks. We’re much better at thinking about the trucks: these are the great security initiatives and projects. The tracks are the routine: the day-to-day. My experience of a lot of organisations, not just the NHS, is that the messages haven’t got down to the grassroots. Data protection is seen as the sole concern of a few geeks, and as a result terrible things happen.  People have heard the messages but haven’t internalised them. Every week I’m dealing with laptops going missing, not encrypted; portable devices and papers left on the bus; sensitive files dumped in a skip. In the health service of course by definition the information is almost certainly going to be sensitive information so we’re working very closely with the NHS so that they can get the big things right – summary care records etc, but they can also get the smaller things – which actually aren’t that small – such as persuading the receptionist in the GP surgery you don’t give things out over the phone just because somebody rings you up and sounds persuasive.”

What can be done in the health sector and other sectors to generate a culture where data protection becomes second nature, rather than seen as an annual event, or a burdensome task?

“I think organisations both in the public sector and the private sector have so much at stake in terms of their reputation, which of course in commercial terms you can put a value on, and in the public sector it’s all about the threat of reversing all the work that’s gone into citizen engagement.  The fact that it’s a real issue at the top of the organisation means that the message then needs to be taken to the whole organisation: it’s not just something of peripheral concern. Yes, it’s about training, but then it’s about auditing, it’s about going back and making sure people are practising what they know they are supposed to be doing – it’s about mainstreaming the whole thing. It should be absolutely part of the performance review system. We shouldn’t have the situations we’ve had over the past few years or so where people dealing with very sensitive information are treating it in such a cavalier way.  Our first civil monetary penalty was imposed on Hertfordshire County Council where they’d faxed highly sensitive court papers in a child welfare case to what they thought was Watford County Court but unfortunately it wasn’t and they’d got the number wrong. You wouldn’t do that if you were thinking about what the material was you were handling, that it was very sensitive, personal information about vulnerable children, so faxing wasn’t a very good idea anyway. You needed to have made sure you had got the right fax number, and that someone was waiting for the fax at the other end and that you didn’t just have finger trouble and were about to send it elsewhere. The civil monetary penalties which we’ve had at [the ICO’s] disposal since April of last year have certainly had a sobering effect and made people sit up and take notice. We’ve only imposed 6 of them – we’re not trigger happy – but it’s certainly made it very real to organisations who have focused on the reputational damage – being hit with a penalty.”

Following the use of social networking sites such as Twitter to reveal the details of super-injunctions earlier this year, the Prime Minister has called for a review of Data Privacy legislation in the UK. Will the ICO be contributing to the work of any new parliamentary committee in shaping any new data privacy legislation?

“Well, the Data Privacy legislation will be reviewed anyway in the context of the European Directive – that’s a process that’s going ahead. The Commissioner Viviane Reding is leading that process and the ICO is very much engaged with our European colleagues in the Article 29 working party – we’ve been inputting into that study. We expect to see a draft of a directive in about November and then the legislative process will follow. In a few years’ time there will be changes to data protection law, because the directive on which it’s based will have changed. We don’t think there is much wrong with the principles, but we’re looking for legislation that is much more modern and realistic in terms of what actually happens in the world of global information exchange.”

 

The legislation has been behind the technology in terms of usage of it, hasn’t it?

“Absolutely. It’s very important that Brussels produces a legislative proposal which is reasonably future-proof – if you get the principles right then the principles can take the technological changes on board. If you’re overly prescriptive then you’ll come up with something that is highly relevant for 2013 but by the time it comes into law it’s probably outdated because of all of these other developments. The ICO, working with anyone who will listen to us, is stressing the accountability principle – that the legal responsibilities lie with the data controller, and that the role of the data protection authority is to regulate that relationship and intervene as and when necessary on the basis of risk, rather than pretend the data protection authorities can be like some latter day King Canute holding back the waves, and let’s not kid ourselves that no information moves across borders without some tick in the box from the Data Protection Authority. The current Directive, of course, is pre-cloud, but it ought to be clear that’s it’s a very outdated text that doesn’t take into account the realities of the modern world.”

Would you like to see greater powers for the ICO, such as the power to audit an organisation to investigate a serious data breach?

“We gained some extra powers under the Coroner’s & Justice Act, and we have found that doing consensual audits is going really well, more in the public sector than the private sector. We can run the ruler over a company’s compliance which can then be a badge of pride: “we’ve been checked over by the ICO”. There are powers to compulsorily audit government departments. I will go as far and as fast with the existing powers that I’ve got, but if I come to the conclusion that I’m not able to get anywhere-  that I can‘t audit  organisations – then I will certainly return to the Secretary of State. I can get warrants – I signed a warrant today. It would be more satisfactory to require an audit, probably as part of an undertaking to improve.”

At a recent ISSA chapter meeting, one speaker remarked that social engineering over the phone is often the seed for an attacker, allowing them either to guess a password or to find a weakness in a system or process to exploit. Many of the data breaches we have touched on this afternoon start from an internal employee in an organisation. Do you think we are doing enough to educate employees in organisations to create a security culture?

“No I think we’re not and you absolutely put your finger on it when, in relation to this row about hacking, you have to ask the question: ‘how is it possible for the phones to be hacked?’ and the answer is: somebody has blagged, which they shouldn’t have done. That gets us back to section 55 of the Data Protection Act – it’s just too easy to blag and the penalty isn’t very impressive if you get caught. So I believe very strongly we’ve got to push for that [a custodial sentence] and I’m trying to get the politicians to see that this is something we need to do anyway – it’s going to take some time. Frankly we can’t wait [for the Leveson Inquiry] . We’ve got information leaking from databases, every day – and not, as I said earlier, to journalists particularly – because information is valuable and it’s making people a lot of money.  That’s the root of our problems – so if we’re concerned about cyber security then getting these basic things right is absolutely essential, and members of staff in all organisations need to see the connection between something which seems to them as a bit naughty, but not terribly bad, and the terrible things that happen as a result.”

Christopher Graham – thank you for your time this afternoon and we look forward to your address at the ISSA next month.

 

The Information Commissioner, Christopher Graham, was talking to Phil Stewart, Director, External Communications, ISSA UK. Christopher Graham will be addressing the ISSA at their next meeting on 8th September 2011 in London.

 



Stemming the Tide of Data Breaches

July 28, 2011

2011 has seen a steady stream of attacks and data breaches at a whole host of well-known, large organisations: Sony, Citibank, RSA, Google, Epsilon Marketing, and more recently, the Pentagon. Why are we seeing such a steady stream of major information security breaches at large organisations? Who can we trust with our data? What can be done to remedy the situation?


2011 has seen a steady stream of data breaches at large organisations: from Sony and Citibank to RSA, Google, Epsilon Marketing and the Pentagon. Lockheed Martin also thwarted an intrusion attempt on their network to steal data.
Image: © Junede - Fotolia.com

Lack of Board Engagement

Opinion differs in the industry as to whether the fault lies with the information security industry itself, in failing to convince boards of the need to act, or with boards that are not actively engaged with information security professionals in the first place. Harvey Nash's CIO 2011 Survey, published earlier this year, generated responses from 2,000 CIOs and industry leaders worldwide. Its findings showed that 50% of respondents sit at the operational management or board level of their organisation – meaning that the other half do not.

When Sony's breach involving 77m PlayStation user details came to light, it was subsequently revealed that Sony did not have a CIO/CISO at the time, and indeed had not had one for a number of years preceding the attack. Board visibility of information security is not sufficient on its own: the CIO must also have genuine influence, raising awareness at board level and influencing decisions on a regular basis. Good information security awareness starts from the top: you cannot expect your employees to have awareness of an issue that has no visibility at board level.

Taking a Holistic View

It is important for every organisation to take a holistic view of information security (technology, processes, people) and not focus on a sole product, standard or the "next greatest thing" and believe that all will be well. The culture of chasing whatever standard or solution is currently in vogue has been largely vendor-driven, with advice given with a heavy slant towards the vendor's own product as the sole answer. I'm keen on vendors and professionals who see the bigger picture beyond their own product, standard or area of expertise and are keen to educate, or on those vendors who, in developing their own solutions, are keen to help improve existing standards. I've been particularly impressed with Sophos in this regard, who regularly give out advice via their security blogs on a whole host of issues, ranging from Facebook scams and phishing attacks to credit card scams and botnets. They are to be commended not only for taking a holistic view of the industry but also for commenting on the various social media scams about which social media users clearly need educating. It also did not surprise me that Sophos won the award for best speaker at the recent ISSA event on board HMS President.

To illustrate why a holistic approach is important: for years Intrusion Detection and Intrusion Prevention systems have been sold as the panacea for detecting all malicious intrusions on your network. Without regular examination and collation of their logs, having IDS/IPS alone is nearly as bad as having no IDS/IPS at all. Consider a "go-slow" attack where an attacker tries to gain access using two logon attempts per hour. Typically, this would not trip either an account lockout or an IPS detection – it needs some intelligence behind it to raise that alarm (be that human or automated, via SIEM). For a large organisation, do you really expect a human operator to sift through thousands of event logs looking for a needle in an electronic haystack? Are you properly and intelligently monitoring your logs, and can you take evasive action quickly should you come under attack (rather than after the data has bolted, as is frequently the case)?
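
To illustrate the kind of correlation needed for a "go-slow" attack: rather than a burst threshold, flag any source that keeps failing to log on, hour after hour, across a whole day. A minimal sketch with an illustrative event format and thresholds:

```python
# Minimal sketch: detect low-and-slow logon failures by counting how many
# distinct hours each source keeps failing in. Thresholds are illustrative.
from collections import defaultdict
from datetime import datetime

def slow_bruteforce_sources(failed_logons, min_hours=8):
    """failed_logons: iterable of (timestamp, source_ip) for failed attempts."""
    hours_seen = defaultdict(set)
    for ts, source in failed_logons:
        hours_seen[source].add(ts.replace(minute=0, second=0, microsecond=0))
    return {src: len(hours) for src, hours in hours_seen.items() if len(hours) >= min_hours}

# Two attempts an hour never trips an account lockout, but a whole day of such
# hours stands out once failed logons are correlated per source.
events = [(datetime(2011, 7, 28, h, m), "203.0.113.9") for h in range(10) for m in (5, 35)]
print(slow_bruteforce_sources(events))   # {'203.0.113.9': 10}
```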

Avoiding the “Checkbox Culture” that Standards Compliance Alone Generates

There's too much noise and focus on standards compliance in the industry, in the mistaken belief that this alone will generate security. It doesn't: taken in isolation, it generates a false sense of security. Information security cannot be seen as an annual tick-box event, with a string of recommendations and good intentions to be acted on at some later date.

Whilst standards compliance is a necessary part of good governance, the industry really should be talking about generating good security cultures. An interesting study would examine, among those companies mentioned above (and others) which have suffered a breach, the percentage which had recently achieved compliance with a particular standard, combined with whether they had a CIO/CISO in place and regular staff training. It would make interesting reading.

A security culture is something that starts in an organisation from the top down: the board is updated at regular and frequent intervals about what is being done across the organisation – what business processes need improving and what staff education programmes are in place or are being updated. CIOs and CISOs should be constantly improving their skill sets and awareness by attending conferences, reading the latest security articles and being aware of innovative solutions that challenge the established way of thinking in the industry. The human factor – the education of staff – is an area that is often overlooked: the Information Security Breaches Survey of 2010 by PwC showed that 80% of large organisations reported an incident caused by staff, yet very rarely do we hear of the need to regularly educate users. It simply isn't good enough to keep blaming staff if you don't have a regular training programme in place. Nor is it good enough to "educate and forget", i.e. only train when a new person joins an organisation and never again – there needs to be a programme in place to educate users at regular intervals, to accommodate new threats, changes to legislation and best working practices.

Taking Professionalism Seriously

It's long past time that our industry took professionalism seriously. Think of a visit to a doctor or a surgeon performing an operation. Would you let a surgeon operate on you who hadn't bothered to attend medical school or didn't think the exams were "really that important or necessary"? There certainly are some bad doctors out there, but the reverse argument – not bothering with professional qualifications to practise medicine – doesn't hold water. Yet that is exactly what a small portion of our industry is doing!

As if to illustrate the point, last week I heard a hilarious story from a journalist who told me how he had uncovered someone who had been blagging their way around the industry as a “security consultant” (incidentally the journalist has given permission for me to repeat this, and the information was obtained via publishable sources and not via phone blagging or phone hacking!). Not only did the ‘consultant’ have no information security qualifications or certifications, but he had previously been working in… the hair products industry!  After containing our laughter in the restaurant, I remarked: “Securing hair braids yesterday, securing data tomorrow!” It would be a funnier joke if it wasn’t happening in our industry.

Would the medical profession allow unqualified staff?

“Don’t worry sir, I’m fully unqualified! I’m good with people though and… I used to cut hair for a living, so you have nothing to fear. How hard can this medical thing be?”
Image: © Joel Calheiros - Fotolia.com

The industry needs to think about standardising on acceptable criteria for practising in this field. I would propose that anyone wanting to employ someone in our industry insists on a CISSP certification as a minimum benchmark, as it demonstrates many of the areas previously discussed – such as taking a holistic view and having relevant experience – and it encourages and requires constant improvement and education. It also demonstrates a commitment to the industry. Getting information security wrong can have a really serious impact on your business, and it certainly isn't about just selling security solutions as a quick 'fix'. I would also urge organisations to ensure that people at all levels are qualified – from the CISO, CIO and CTO down: it's not good enough to ensure your junior staff are qualified whilst your security leaders are not – lead by example.

Changes to Legislation Are Required

I both welcome and support a change in the law to introduce mandatory breach notification in the UK, as is already the case in US states such as California. I would also like to see, as part of filing a company’s annual accounts or statutory annual return, a list of the security measures it is taking or will be undertaking to safeguard personal data. If there is a statutory requirement to report annual financial accounts, why not have something (albeit more sophisticated) in place for information security as well? If people have to sign off on security measures that subsequently turn out to be false or inadequate, and face prosecution as a result, it may just make boards wake up to the fact that inaction is not an option and that people’s data and privacy are something we value as a society. Granted, this alone will not be a panacea, nor will it be easy to legislate: a one-size-fits-all policy is not appropriate for organisations of every size, but then the Companies Act 2006 already accommodates different annual accounting requirements for different sized organisations.

Without enforcement, legislation alone is unlikely to succeed in changing culture. I would also like to see stiffer penalties for breaches of section 55 of the Data Protection Act enforced in the UK. In May 2006, the Information Commissioner published a report, “What Price Privacy?”, which uncovered the illegal trade in personal information; a follow-up report was published six months later. The act of blagging (which has been at the root of the phone hacking scandal) is a criminal offence under section 55 of the Data Protection Act 1998. Currently, however, it carries a fine of up to £5,000 in a Magistrates’ Court and no custodial sentence. Whilst section 77 of the Criminal Justice and Immigration Act 2008 (CJIA) subsequently included provision for a custodial sentence of up to two years, this provision cannot come into effect until the Secretary of State makes the relevant order. Whilst the recent phone hacking scandal has focused on some journalists using blagging to obtain personal information, the “What Price Privacy?” report of 2006 showed the practice is far from confined to the journalism sector; it is just the visible tip of a much bigger iceberg, and I urge people to read that report from the ICO.

To conclude: whilst I don’t share the pessimism of some in the industry, I am not complacent either. There is still much to be done, in changing and enforcing the law, in educating both board members and employees, and in ensuring the industry thinks holistically, as individuals and as organisations.

Phil Stewart is Director of Excelgate Consulting and Director of External Communications, ISSA UK


Twitter Tourettes

June 15, 2011

Twitter celebrated its fifth birthday earlier this year. It has revolutionised the way people interact in the 21st century, sitting at the heart of revolutions across the Middle East and at the centre of a media and legal maelstrom in the UK over super-injunctions. Along with Facebook, it has become the communication medium of choice for the “Google generation”. The appetite for instant news and celebrity gossip has proved insatiable, and it has brought with it a whole host of dangers for individuals and organisations that are unwary or unprepared.

"Twitter Tourettes" elearning "Social Networking Security" eCourse

Twitter celebrated its fifth birthday on 21st March 2011. Twitter users now send more than 140 million tweets a day.

Twitter Tourette’s, to coin a phrase, is used here to describe a growing phenomenon on the Internet: a seemingly unwitting compulsion to publicise private or inappropriate material online (this is not to make light of Tourette’s Syndrome, in which around 10% of those affected swear uncontrollably). Serious thought needs to be given not only to the use of social networking tools within the workplace, but also to training employees on how they use them in their private lives.

High Profile Cases:

Bomb ‘joke’ on Twitter results in conviction under the Communications Act 2003

On 6th January 2010 Paul Chambers, a 27-year-old accountant, posted a ‘joke’ on Twitter threatening to blow Robin Hood Airport “sky high”. He was arrested at work a week later and subsequently convicted of an offence under section 127 of the Communications Act 2003, for “sending a public electronic message that was grossly offensive or of an indecent, obscene or menacing character”. Paul was fined £1,000, lost his job as a result of the tweet, and lost his subsequent appeal against the conviction at Doncaster Crown Court.

Press Complaints Commission rules Twitter messages are “not private”

Sarah Baskerville, a Department for Transport official, filed a complaint to the press regulator, arguing that her tweets about being hung-over at work were private, intended for her 700 Twitter followers and not for publication in the press (the Daily Mail and the Independent on Sunday both reported the story). The Press Complaints Commission ruled that, as Twitter was publicly accessible and the potential audience reached much further than her own followers (since messages on Twitter can be re-tweeted to others), publication of the story in the press did not constitute an invasion of privacy.

IT Consultant unwittingly tweets details of the raid on Osama Bin Laden hide-out live

An IT consultant, Sohaib Athar, living in Abbottabad, Pakistan, was the first person to tweet, unwittingly, about the raid on Osama Bin Laden’s hide-out. He reported live as events unfolded that a helicopter was hovering over Abbottabad, followed by a loud explosion. He pondered: “Since Taliban (probably) don’t have helicopters, and since they’re saying it was not “ours”, so must be a complicated situation”. Later that day, after the White House press conference on the raid, it dawned on him that he had tweeted the operation live: “Uh oh, now I’m the guy who liveblogged the Osama raid without knowing it”. Within hours of his initial tweets his follower count had surged by over 15,000.

European head of Twitter indicates it may reveal to police details of Twitter users who broke super-injunction

Lawyers acting for a UK Premiership footballer filed court papers against Twitter and a number of its users last week after they allegedly broke the terms of a “super-injunction” banning publication of details of his private life. Tony Wang, the new European head of Twitter, has indicated that it could give police details of users who broke the gagging order, in line with its global policy for dealing with legal requests. Speaking at the e-G8 forum in Paris, Mr Wang said that Twitter’s policy was to comply with local laws and to hand over details where it was “legally required” to do so.

South Tyneside Council acts in US Court to reveal identity of Twitter users behind allegedly libellous statements

South Tyneside Council went to court in California to request that Twitter release the identities of five Twitter users who were allegedly libelling a number of the council’s councillors. The “Mr Monkey” blog had made a number of accusations against the council leaders. Council spokesman Paul Robinson has revealed that Twitter has disclosed information to the council’s lawyers, including IP addresses and email addresses.

A Tweet is a publication, not a private message!

Twitter should be regarded as a publishing platform, not a means of private communication. All the examples above illustrate the dangers of the unguarded use of Twitter and social networking sites: revealing the details of secret military operations; opening the way for legal action, whether for defamation or for breaking the terms of a gagging order; the uncontrolled release of material into the press; and damage to reputation, corporate or private.

New e-learning Course

Excelgate Consulting has teamed up with Ira Winkler and VigiTrust to provide an e-learning training course: Security of Social Networks. The course runs in a standard web browser and can be completed in stages at the participant’s own pace.


The Security of Social Networks e-learning course module: introduction.

Upon completion of the course, your employees take a test to assess their awareness, and a certificate is generated when they pass. On successful completion of the Security of Social Networks course, users will:

• Be able to distinguish between direct and indirect attacks from hackers and other unscrupulous individuals, and know how to avoid exposure to them

• Recognise the threats posed by seemingly inconsequential personal or confidential work information and identify the various ways in which criminals may exploit social networks

• Gain a good understanding of the main features of the major social networking sites, and of how careless activity can impact negatively on corporate applications and sensitive customer information

The new Security of Social Networks e-learning course is available now, and a short demo is available here. Please contact us for further information.


ISSA 5173: A New Security Standard for SMEs

March 17, 2011

Last week at our chapter meeting, ISSA UK published a new standard specifically designed for Small and Medium Enterprises (SMEs): ISSA 5173. It is the result of a workgroup of 30 information security professionals in the ISSA.

ISSA 5173

A year ago, at March 2010’s ISSA AGM and chapter meeting, David Lacey, Director of Research at ISSA UK, presented the case for action on information security in the SME community. This resonated with many members of the audience and, as a result, a workgroup was set up comprising vendors, consultants, directors, researchers and chief security officers.

SMEs (typically defined as 250 employees or fewer) make up 99.9% of UK businesses, according to BIS’s Small and Medium Enterprise Statistics, published in October 2010, and account for 49% of UK turnover. Too often, information security within an SME is regarded as a “grudge purchase” or perceived as “someone else’s problem”. In the UK, the definition of a small or medium business for accounting purposes is set out in the Companies Act 2006. A small company has “a turnover of not more than £6.5 million, a balance sheet total of not more than £3.26 million and not more than 50 employees”; a medium company has “a turnover of not more than £25.9 million, a balance sheet total of not more than £12.9 million”. These companies clearly have intellectual property of considerable worth, and there is a clear need for SMEs to take action: not just for the sake of compliance but also to safeguard their own assets. Information in the 21st century has considerable value, and losing it, as recent events have shown, can be very damaging to reputation. For a small business, it may well prove fatal to the business itself.

Part of the problem has been a general lack of innovation in the industry in recent times. In creating this draft standard the ISSA looked at existing standards, but the consensus was that a fresh approach was needed to deal with information security for SMEs. Clearly, the situation for SMEs is not getting any better: in fact, it is getting worse. According to the biennial Information Security Breaches Survey (ISBS) published by PwC in 2010, 35% of small businesses surveyed had suffered a malicious attack in 2008; this rose to 74% in 2010. Similarly, the average number and cost of security breaches in a small organisation rose from 6 incidents in 2008, with the worst costing an average of £20,000, to 11 incidents in 2010, with the worst costing an average of £55,000.

The way security has been pitched to the SME has been completely wrong in the past, failing to understand the differences both in the key drivers for a small business and in the way it operates. Large corporates have huge resources at their disposal in terms of time, staff and expertise, both in IT and in law. Their boards are driven by compliance because they are aware of the relevant legislation and of the need to comply with it. SMEs, by contrast, unless operating within our own industry, are unlikely to be aware of key legislation or to have the necessary IT expertise to act on it. Drowning a small business in mountains of paperwork and a complex risk assessment is appropriate only for the largest SMEs. Nor does banging the drum of “regulatory compliance” have any resonance with the small business owner. The irony, of course, is that compliance with key legislation is not optional: the Data Protection Act applies to all businesses, large and small, and any company accepting payments by credit card needs to review its PCI DSS compliance.

Even if the SME owner is inclined to do something about information security, where do they go for up-to-date guidance? There is certainly a great deal of information available online, but it is spread across numerous websites and is focused primarily at large corporates or government bodies, where heavyweight processes and large amounts of paperwork are the norm. The guidance is also often completely out of date, not by months but in some cases by a whole decade, and does not address current threats and security issues. For example: which is more relevant to the security landscape of 2011, dial-back access security or cloud computing security?

In drafting the new standard, the ISSA looked closely at the status quo and decided that the only option was to create a new standard from scratch, specifically for the SME market. It has been written in language appropriate for the small business owner. Over the coming months we will be reviewing the feedback received from the consultation exercise. Moving forward, the ISSA UK chapter sees a need to provide up-to-date guidance on the key issues affecting small businesses.

ISSA 5173 is the new draft standard for information security from the ISSA UK Chapter, and is available for review and download. Feedback on the new standard is welcome at: SMESecurity@issa-uk.org

Phil Stewart is Director of Excelgate Consulting and Director, External Communications, at the UK Chapter of the ISSA.