What Price, Privacy?

October 15, 2014

At IP Expo in London last week, the inventor of the World Wide Web, Tim Berners-Lee, outlined his vision for the future of the web and of data ownership in his keynote speech.


© Victoria – Fotolia.com

Over the past few years there have been numerous concerns about big data and its use. Tim Berners-Lee argues that whilst he is perfectly happy for his personal data to be used by professionals where it benefits him or mankind (e.g. by healthcare professionals after a road traffic accident), the same does not apply to commercial organisations using it for targeted marketing. Tim argues that the owner of personal data is the data subject, not large corporations, and that we should have a say in how it can be used and where it can be sold or merged with other data sets.

Others take a different view. An interesting panel discussion with Raj Samani, Andrew Rose and Josh Pennell ensued later in the conference. The panellists disagreed over whether the ‘reputational damage’ argument around data loss still means anything to organisations, since data breaches are seemingly ubiquitous and those outside the industry, especially the Facebook generation, simply don’t care if their personal data is lost. Time will tell whether Target, for example, suffers long-term damage from its breach, although early indications are that its share price has largely recovered.

Earlier in the month, Raj Samani had argued in a webcast to Forbes that, as a consumer, he gives consent for his personal data to be used and shared when he signs up for services, and that most consumers will happily do so, given consent, perceived value and transparency. After all, services such as Twitter and Facebook are free for the end-user. Raj does concede, however, that the terms and conditions are rarely clear, being large legal documents, and that few end users will actually read them. There is rarely effective transparency in these cases if end-users do not realise what they are signing up to.

In practice, Tim’s proposal might work by changing legislation to afford personal data the same status as copyrighted material: use in any context would require specific owner consent, and re-use would require a royalty paid to the data subject. It might also solve the forthcoming conundrum about the “right to erasure” in the new EU Data Protection legislation: if I ask to be removed from one database, in theory the deletion should cascade into third-party databases where the data has been sold on. It would also take away some of the burden from over-worked regulators like the ICO, who are severely under-resourced.

I’m sure many organisations will say such a model is unworkable, but it may just make organisations think about how and where they use our personal data, especially if over-use directly affected the bottom line via royalty payments. Forty years ago, in the Cold War era, if you had suggested an interconnected network of disparate computers upon which banking, finance, government, education, retail and leisure would all rely – one that would become both ubiquitous and intrinsic to our daily lives – the idea would probably have been dismissed as unworkable science fiction. Yet in 2014, we rely upon the internet to sustain all these things and more. Our industry needs equally radical thinking today for a new model of data protection.

Phil Stewart is Director, Excelgate Consulting


Compliant But Not Necessarily Secure? How HR Can Help…

September 25, 2014

As the recent Target breach has shown, being compliant alone does not necessarily mean being secure. A recent analysis by the United States Senate of the theft of personal and financial information of 110 million Target customers illustrates that key warning signals that Target’s network was being infiltrated were missed within the organisation. Both anti-malware and SIEM tools identified suspicious behaviour on the network, yet the warnings went unheeded, and two days later the attackers began exporting customer data.


© Rawpixel – Fotolia.com

The fact that Target was certified as compliant with the PCI DSS standard just two months before the breach took place has certainly caused a lot of debate in the industry. I’ve always argued that simply pursuing an annual ‘checkbox exercise’ is not enough: there must be a lasting, ongoing cultural awareness of data security.

I’ve also long argued that whilst responsibility and drive for data security lie at board level, the entire business needs to be both aware and on board too. That’s why HR is vital to achieving this. KPIs for data security, along with employee training programmes, need to reinforce the goals of the CISO. Data security KPIs should be SMART: Specific, Measurable, Achievable, Realistic and Time-orientated. Across all business units there should be employee awareness training programmes at least annually (and perhaps quarterly, depending upon the nature of the business).

The KPIs should be pertinent to the business unit they operate in and relevant to the data security standard your organisation is working towards gaining (or maintaining) compliance with. For PCI DSS, for example, the requirement to ‘Protect Cardholder Data’ clearly has different implications for different business units. For a call centre, the KPIs should be around not storing PANs in a readable format; not recording CV2 numbers taken over the phone; not writing cardholder details down on bits of paper for subsequent entry; and the secure physical storage of paper records. For development, they would pertain to the encryption methods used; key storage and protection; how PANs are rendered unreadable electronically, in adherence to the standard; and how cardholder data is transmitted across networks. For IT support, they could relate to maintaining up-to-date anti-malware and intrusion prevention systems; hourly monitoring of SIEM information; and weekly reports to senior management on monitoring status and patch levels.
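To make the call-centre example concrete, here is a minimal sketch of rendering a PAN unreadable for display. PCI DSS permits showing at most the first six and last four digits; the `mask_pan` helper below is hypothetical, for illustration only.

```python
# Illustrative sketch only: PCI DSS allows at most the first six and last
# four digits of a PAN to be displayed; everything in between is masked.
def mask_pan(pan: str) -> str:
    """Mask a Primary Account Number for display (first 6 + last 4 visible)."""
    digits = pan.replace(" ", "").replace("-", "")
    if not digits.isdigit() or len(digits) < 13:
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```

A KPI might then measure, for example, the percentage of call-centre screens and exports that display only masked PANs.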

Whilst line managers are usually responsible for setting KPIs in a business, I strongly believe HR can make a valuable contribution: by enforcing company policy in ensuring all line managers include data security quarterly KPIs in their target setting. A function of HR in a business is to safeguard the reputation of a business, something which is also the function of an information security professional.

A recent study by insurance firm Beazley analysed more than 1,500 data breaches it serviced between 2013 and 2014, and discovered that employee errors accounted for 55% of them. HR is also about people management and defining the culture of an organisation. Whilst I’m not suggesting that HR take on responsibility for information security, it certainly has a part to play in ensuring that the correct mindset operates across all parts of the business. Changing an organisation’s culture requires a sustained effort from numerous players across the business, and HR is one of the key ones.

Phil Stewart is director of Excelgate Consulting.


Bring Your Own Pandora’s Box

May 1, 2012

Bring Your Own Device (BYOD) seems to be spreading through the industry as a debating point like a virulent virus. Is it fast becoming the 21st-century version of the “Pandora’s Box” for the workplace?


In ancient Greek mythology, Pandora’s box contained all the evils of the world. Compelled by her own curiosity, Pandora opened it, and all the evil contained therein escaped and spread all over the world. Is “Bring Your Own Device” (BYOD) the 21st-century equivalent?
Image: © foaloce - Fotolia.com

I’ve long been an advocate of information security professionals adopting a more flexible approach to giving end-users the functionality and flexibility they want: a happier user is a more productive user. Indeed, a recent study published by O2 last month [1], the largest flexible working pilot of its kind in the UK, found that workers were more productive, happier and richer when they did not go into the office.

Users want to experience the functionality, applications and interfaces that they use and are familiar with in their private lives. They want devices that allow them to work remotely, outside the office and on the move. All things can be taken too far, however, and this is exactly what is happening with BYOD. Some organisations appear intent on throwing out the corporate-security baby with the data bathwater.

BYOD should be approached with the same three basic principles as any other information security issue: confidentiality, integrity and availability. Vast sums were spent on secure laptop rollouts in the noughties in order to give workers the ability to work from home and on the road securely. Did we ask (or allow) users to bring their own home laptops (along with their malware!), plug them into the corporate network and hope for the best? Of course we didn’t!

Some good reasons why BYOD may not be such a great idea:

  • What happens if the device is lost or stolen? Along with the device go (unencrypted) corporate data and contacts, often with no ability to remotely wipe or kill it.
  • Removable media is easily targeted by would-be remote hackers as a way of bypassing network firewalls: it is the assumed route, for example, for the Stuxnet worm.
  • Most smartphones now have both voice recording and high-definition camera capabilities, easily allowing for industrial espionage. There’s a good reason most high-security organisations don’t allow them on their premises.
  • Laws on monitoring employees’ personal web behaviour and data vary from geography to geography: a potential headache for corporate risk and legal departments.
  • How will you monitor the mobile devices for malware, and prevent its introduction into the corporate network?
  • How will you identify which device is ‘rogue’ and which is ‘approved’ in a true BYOD environment?
  • How do you monitor for acceptable use on an employee-owned device? (assuming there are no legal blockers)
  • How will you prevent the employee walking out the door with your customer and supplier contact lists?
  • How do you prevent illicit or unlicensed content from entering the corporate network?
  • Which devices are deemed to be supported and unsupported?
  • Does your IT department have the right skills to support cross-platform mobile applications in a secure fashion?

The more enlightened organisations are, of course, adopting a sensible compromise: rolling out well-known tablets to their workforce in a controlled manner, whilst not allowing private devices to access the corporate network. According to Apple, nine months after the iPad launched, 80% of Fortune 100 companies had either deployed or were piloting the device [2]. When deploying mobile devices you should consider all the areas you would have done for a laptop, and more:

  • How will the data on the device be encrypted?
  • How will you transmit data encrypted to your internal network and external service providers?
  • How will you ‘remote kill’ the device should it be lost or stolen?
  • How will you prevent malware and inappropriate content entering the network?
  • How will the user authenticate securely onto both the device itself and also the corporate network?
  • How will you identify vulnerabilities on the mobile Operating System?

It is perhaps my last question which causes me the most concern, since some in the industry appear to think that certain mobile platforms are not open to vulnerabilities and exploits in the same way as their desktop and server counterparts. A recent mystery shopper exercise has confirmed my opinion: most do not take the threat posed by mobile devices as seriously as they should. The SANS Institute thinks otherwise: a whitepaper it published in 2011 [3] stated that whilst Apple has done a great job by allowing only digitally signed software to be installed on a non-jailbroken iPhone, there were still vulnerabilities that could be exploited, particularly in the web browser (Safari). Earlier this year, Reuters reported a flaw in the Android operating system which allowed hackers to eavesdrop on phone calls or monitor the location of the device [4].

In order to reduce the risk posed by BYOD, organisations should develop an acceptable usage policy for tablets which clearly identifies the steps end-users should take to complement the security built into the device by the IT department. This should include sensible advice on the dangers of shoulder-surfing when logging on in a public area; ensuring the device is not left unattended in a public place; and connecting to the corporate network regularly to allow updates.

The sensible compromise of rolling out familiar, but secured, tablets in the workplace, combined with an acceptable usage policy, will prevent your employees from seeing management as “tone deaf”, and may well realise significant increases in both end-user productivity and creativity, whilst maintaining a sensible approach to safeguarding against the unguarded use of mobile devices in the workplace.

Phil Stewart is Director, Excelgate Consulting & Secretary & Director, Communications, ISSA UK

Sources:

[1] Working From Home More Productive, Telegraph 3rd April 2012:
http://www.telegraph.co.uk/technology/news/9182464/Working-from-home-more-productive.html

[2] Morgan Stanley: Tablet Demand & Disruption:
http://www.morganstanley.com/views/perspectives/tablets_demand.pdf

[3] SANS Institute: Security Implications of iOS:
http://www.sans.org/reading_room/whitepapers/pda/security-implications-ios_33724

[4] Android Bug Opens Devices to Outside Control, Reuters, 24th February 2012
http://www.reuters.com/article/2012/02/24/us-google-android-security-idUSTRE81N1T120120224


2011: The Year that News Made the News

December 15, 2011

2011: a year that started with the continued aftermath of the WikiLeaks saga, and ended with the Leveson Inquiry investigating the phone hacking scandal in the UK. Along the way, many big names suffered data breaches, including Citibank, Epsilon Marketing, RSA, Google and Sony.

News in the News

2011 was the year that the news made the news. We started the year talking about classified information finding its way into the press and ended the year with an inquiry into the press. Alleged offences under section 55 of the Data Protection Act are at the heart of the phone hacking scandal.
Image: © Claudia Paulussen - Fotolia.com

Back in January Excelgate reported that following the ongoing WikiLeaks saga, it was necessary to have a coordinated global response across governments if a repeat of such a massive leak of classified material was to be avoided.  The industry is just coming to terms with the ramifications of WikiLeaks, and at the East West Institute in London in June, it was recognised that there needs to be greater co-ordination in the future, not just between technical and legal practitioners and politicians, but also across geographies in drafting new data protection and privacy legislation in the future.

2011 saw the launch of some new standards and assurance schemes in the UK. In March, ISSA UK launched ISSA 5173 – a new security standard aimed at improving information security for small and medium-sized enterprises. I’ve been involved in the formation of the standard since the foundation of the workgroup back in 2010, and in the collation of feedback since the publication of the draft standard in March. To date, feedback has been overwhelmingly positive, and the standard has been well received by the IT press. 2012 will see the publication of a series of guidance documents to accompany it. 2011 also saw the launch of CESG’s Commercial Product Assurance, the replacement for the CCTM product assurance scheme for security solutions in the UK public sector. It aims to provide two levels of assurance for security solutions, depending on where the proposed solution is to be used. This should open up the relatively closed space of IL3 (for example, local authorities) by encouraging competition and innovation.

August saw rioting across London and other parts of the UK. Social media was widely reported to have been used by some rioters to coordinate the unrest. As reported in June, the misuse of social media comes in a number of forms, from criminal acts to damaging reputations – both corporate and private. In November, speaking at the London Conference on Cyberspace, the Foreign Secretary, William Hague, whilst warning against the dangers of government censorship of social media, did state that global, coordinated action is needed to deal with social media misuse and cyber attacks. In short, he said that “behaviour that is unacceptable offline is also unacceptable online, whether it is carried out by individuals or by governments.”

The Leveson Inquiry, investigating the culture, practice and ethics of the press, started its formal evidence hearings in November. At the heart of the phone hacking scandal is that, somewhere along the chain, personal data was potentially procured illegally. The What Price Privacy? report of 2006, published by the Information Commissioner’s Office, had already highlighted widespread abuse of the Data Protection Act in the UK and recommended a custodial sentence for Section 55 offences. As reported in my interview with Christopher Graham in August, the Information Commissioner will continue to push for custodial sentences for the most serious offences.

The message of the What Price Privacy? report has somehow become lost, with the focus falling on journalists rather than on the illegal trade in personal data. I certainly don’t want to see restrictions placed upon a free press: it is the foundation of any democracy. Wrong-doing and fraud should be exposed, and there is already a large carve-out of exceptions in the Data Protection Act for journalistic purposes. The trade and procurement of personal data is already illegal, however, and in my view the law is not being enforced severely enough – either in severity of sentence for the most serious cases, or in the number of prosecutions.

I have been impressed with the ICO’s response to date with regards to education and the issuing of fines. They have taken a very reasonable position: that people need education in how to comply with the Data Protection Act. It would indeed have been foolish to adopt a mass fining policy when the ICO’s powers to issue a monetary penalty notice only came into effect in April 2010. On the other hand, April 2012 will mark the second anniversary of those powers, and to continue indefinitely in this mode would, in my opinion, be a mistake. I certainly don’t want to see a situation where every single human error results in a fine from the ICO. The Data Protection Act is now, however, a 13-year-old piece of legislation, and organisations have had two years to ensure they comply with it. If HMRC operated on a mainly notice-to-improve basis, I’m fairly sure we’d see a significant decline in tax revenues (i.e. non-compliance). They don’t, however: they operate a sliding scale of penalties depending upon the severity of the mistake or non-compliance. For mistakes on VAT returns, for example, penalties are categorised as follows:

  • Careless: you failed to take reasonable care
  • Deliberate: you knowingly sent HMRC an incorrect document
  • Deliberate and concealed: you knowingly sent HMRC an incorrect document and tried to conceal the inaccuracy

In each category there is also a sub-category: prompted and unprompted (i.e. HMRC discovers the discrepancy, or you notify HMRC of it). The fines vary from 100% of the tax due for deliberate, concealed and prompted, down to 0% for careless and unprompted. I’d like to see a similar sliding scale defined for data protection offences and then enforced, since I simply don’t believe there have been only seven serious data protection offences in the UK since April 2010. I’ve come across more than that myself this year alone, ranging from the potentially criminal to the careless error.
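As an illustration only, the sliding scale above could be encoded as a simple lookup. The 100% and 0% extremes come from the categories described; the intermediate rates below are placeholder values for the sketch, not actual HMRC figures:

```python
# Illustrative sliding-scale penalty lookup modelled on the HMRC categories.
# Only the 100% (deliberate & concealed, prompted) and 0% (careless,
# unprompted) rates come from the text; the rest are placeholders.
PENALTY_RATES = {
    ("careless", "unprompted"): 0,
    ("careless", "prompted"): 15,                # placeholder
    ("deliberate", "unprompted"): 20,            # placeholder
    ("deliberate", "prompted"): 35,              # placeholder
    ("deliberate_concealed", "unprompted"): 30,  # placeholder
    ("deliberate_concealed", "prompted"): 100,
}

def penalty(tax_due: float, behaviour: str, disclosure: str) -> float:
    """Return the penalty owed for a given behaviour/disclosure combination."""
    rate = PENALTY_RATES[(behaviour, disclosure)]
    return tax_due * rate / 100

print(penalty(10_000, "deliberate_concealed", "prompted"))  # 10000.0
print(penalty(10_000, "careless", "unprompted"))            # 0.0
```

A data protection equivalent might grade offences from careless error up to deliberate, concealed trade in personal data, with penalties scaled accordingly.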

I’ve also, over the course of 2011, become convinced of the need for a mandatory data breach notification law in the UK, as is already the case in some US states. The naysayers are already out in full force, saying the UK doesn’t need another piece of legislation regulating business. It is worth bearing in mind that this legislation originated in California – that US state well known for over-regulating businesses and stifling innovation… not! Similarly, the criticism of data breach notification laws is not based upon any real-world experience. A study from the University of California, Berkeley of the views of CISOs in the US showed that data breach notification laws have put data protection and information security firmly into the public eye, and in some cases have actually fostered dialogue between the consumer and the data controller regarding their data. Such laws also empower consumers to protect themselves, either by asking awkward questions of their data controllers or by simply shopping elsewhere. We need this raising of awareness and dialogue in the UK too. Why should we either trust or trade with an organisation that doesn’t safeguard our privacy?

Phil Stewart is Director, Excelgate Consulting & Secretary & Director, Communications for ISSA-UK


The Apple of my i

December 7, 2011

As 2011 draws to a close, inspiration for the way forward in the information security industry is drawn from the past and a man outside of it: Apple’s former CEO – Steve Jobs, who passed away on 5th October, 2011.

Apple: Think different

Apple’s iconic logo is as well known globally today as its products. Many urban myths have sprung up over the years about the origins of the Apple logo: that it is the apple of knowledge from the Garden of Eden; that it resulted from Steve Jobs having worked in an apple orchard; that it refers to the death of the father of computer science, Alan Turing, who died after biting an apple laced with cyanide. An interview with the logo’s designer, Rob Janoff, in 2009 revealed that none of these was the inspiration for the bitten apple: it was purely a design decision, to give the logo scale and make the apple instantly recognisable as an apple rather than some other fruit, such as a cherry. This version of the logo was used by Apple from the mid-1970s until 1998; the “Think Different” slogan was retired in 2002.

It was interesting to see the huge number of tributes to Steve Jobs when he passed away in October this year. Has there ever been such a response to the death of a technologist? His death produced tributes from every walk of life and all corners of the globe: from presidents to prime ministers; from rock stars to everyday users of Apple’s products. Steve Jobs was a special person who instinctively knew what his customers wanted – without asking them. He saw the value of technology as an enabler, improving people’s lives, but he absolutely understood that most people are not fascinated by technology in itself but by what it can do for them in their everyday lives (in the same way the motor car was an enabler in the last century: few people are concerned with how cars work). Apple’s products all have an underlying intuitiveness about them that really does allow users to unpack them and use them straight away, without assuming any prior technical knowledge.

Most of us, however, do not possess the instinctive insight that Steve Jobs had: we have to do our market research with our target audience first. Historically, many information security professionals would not engage with end users at all, since these are the people – they believe – who will ask for things they can’t have and do all the things they don’t want them to do.

Some years ago I was brought into an American investment bank as project manager on a desktop platform refresh programme that had previously failed miserably in its objectives. Successive project managers had come and gone, and it always seemed a case of one step forward and two back. Typically for the IT department, the team were kept out of sight and out of mind in the lower dungeons of the bank, safely away from daylight and customers. There wasn’t even a telephone for the team: all communication had previously been done via email!

I decided that as well as talking to the head of each business unit to determine what their requirements were, I would also – shock, horror! – talk to the end users of each team to determine and capture which applications they used and how they used them in their day-to-day tasks. I also initiated a series of end-user tests, and ensured that a representative from each team came down and tested the applications before the desktop builds were approved and shipped back to the respective end-user desks. When business managers asked why we were asking for 30 minutes of one staff member’s time for testing, I explained that this was improving staff productivity, by reducing helpdesk calls and eliminating recalls of failed builds. This strategy paid off: not only did we retain this business, we went on to win further rollouts for other parts of the bank.

This year I’ve heard and seen things which beggar belief. A consultant proudly boasting that the organisation he was contracted to work for “deserved data breaches” because “their staff were uneducated” (worse still, he meant this in a generic sense, not specifically in relation to information security education). I’ve also heard all security vendors being branded as “snake oil vendors”. An interesting concept: I don’t think we’d have much of an industry without security vendors, and I’ve come across one or two unscrupulous practitioners in my time who have scant regard for data privacy themselves, to whom the disingenuous adjective could easily apply.

There certainly are some security vendors to whom the adjective “snake oil” can easily be applied (and a reputable reseller recently reminded me of one): those that have little respect for their customers’ product feedback; who are in the business purely to make money without advancing genuine information security; and whose products are so desperately clunky that they require reams of documentation, greatly reduce user productivity and encourage end users to find a workaround, thus bypassing security policy. Equally, however, there are some innovative vendors on the market that are genuinely interested in advancing information security: by helping to develop new standards; by helping the SME community, taking the laborious task of log oversight away from them and outsourcing it to specialists; or by helping to secure use of the cloud. I’ve come across all these types of vendors this year. To label all security vendors in the same fashion is not only disingenuous to the good ones but also rather childish.

Earlier this year, when I interviewed the UK’s Information Commissioner, Christopher Graham, for the ISSA, he remarked that he felt end users were just not getting the message regarding data protection. Too often we see the same old problems: users not being educated, making basic mistakes. Personally, I think we have an industry that’s geared up for messaging aimed mainly at board and manager level and framed around legal compliance, so is it any wonder? Who is teaching end users how to handle personal data correctly, and what should and shouldn’t be stored regarding credit cards, day to day, in their jobs? Similarly, I’ve always disliked the term “evangelist” – widely used in our industry – since it implies preaching. Who on earth likes being preached to? Perhaps that’s why few end users are listening.

We urgently need an approach where information security professionals think of themselves as business enablers, enhancing security whilst talking a language their end users understand. For twenty-plus years now, we’ve been assuming that all our problems will be solved if only we throw more technology at them. Yet still we see data breaches. Similarly, we need security products focused on improving end-user productivity, rather than working against the business. Then users might stop looking for workarounds to the solutions and, hence, to their security policy.

If only Apple did iSec!

Phil Stewart is Director, Excelgate Consulting  and Secretary and Director, Communications for ISSA UK.


Stemming the Tide of Data Breaches

July 28, 2011

2011 has seen a steady stream of attacks and data breaches at a whole host of well-known, large organisations: Sony, Citibank, RSA, Google, Epsilon Marketing, and more recently, the Pentagon. Why are we seeing such a steady stream of major information security breaches at large organisations? Who can we trust with our data? What can be done to remedy the situation?


2011 has seen a steady stream of data breaches at large organisations: from Sony to Citibank, to RSA, Google and Epsilon Marketing and the Pentagon. Lockheed Martin also thwarted an intrusion attempt into their network to steal data.
Image: © Junede - Fotolia.com

Lack of Board Engagement

Opinion differs in the industry as to whether the fault lies with the information security industry itself for failing to convince boards of the need to act, or whether boards fail to engage with information security professionals in the first place. Harvey Nash’s CIO Survey 2011, published earlier this year, generated responses from 2,000 CIOs and industry leaders worldwide. Its findings showed that 50% of respondents sit at the operational management or board level of their organisation – meaning that the other half do not.

When Sony’s breach involving the details of 77 million PlayStation users came to light, it was subsequently revealed that Sony did not have a CIO/CISO at the time, nor indeed for a number of years preceding the attack. Board visibility of information security is not sufficient by itself: the CIO must also have genuine influence, able to raise awareness with the board and shape decisions on a regular basis. Good information security awareness starts from the top: you cannot expect your employees to be aware of an issue that has no visibility at board level.

Taking a Holistic View

It is important for every organisation to take a holistic view of information security (technology, processes, people) and not to focus on a sole product, standard or “next greatest thing” and believe that all will be well. The culture of chasing whichever standard or solution is currently in vogue has been largely vendor-driven, with advice given with a heavy slant towards the vendor’s own solutions as the sole answer. I’m keen on vendors and professionals who see the bigger picture beyond their own product, standard or area of expertise and are keen to educate, and on vendors who, in developing their own solutions, are keen to help improve existing standards. I’ve been particularly impressed with Sophos in this regard, who regularly give out advice via their security blogs on a whole host of issues ranging from Facebook scams and phishing attacks to credit card scams and botnets. They are to be commended not only for taking a holistic view of the industry but for commenting on the various social media scams about which social media users clearly need educating. It also did not surprise me that Sophos won the award for best speaker at the recent ISSA event on board HMS President.

To illustrate why a holistic approach is important: for years, Intrusion Detection and Intrusion Prevention Systems have been sold as the panacea for detecting all malicious intrusions on your network. Yet without proper examination and collation of their logs on a regular basis, having IDS/IPS alone is nearly as bad as having no IDS/IPS at all. Consider a “go-slow” attack, where an attacker tries to gain access using two logon attempts per hour. Typically, this would trip neither an account lockout nor an IPS detection – it needs some intelligence behind it, human or automated (via SIEM), to raise the alarm. For a large organisation, do you really expect a human operator to sift through thousands of event logs looking for a needle in an electronic haystack? Are you properly and intelligently monitoring your logs, and can you take evasive action quickly should you come under attack (rather than after the data has bolted, as is frequently the case)?
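The kind of intelligence needed behind the logs can be sketched in a few lines. The sketch below is illustrative Python only, not any particular SIEM product: the event format, the 24-hour window and the failure threshold are all assumptions made for the example. It correlates failed logons per account over a long sliding window, which catches a two-attempts-per-hour attacker that no per-session lockout threshold would ever notice.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative sketch: flag "go-slow" brute-force attempts that individual
# lockout thresholds miss, by counting failed logons per account over a
# long sliding window. Thresholds here are arbitrary example values.
WINDOW = timedelta(hours=24)
THRESHOLD = 10  # more failures than this per account within WINDOW is suspicious

def flag_slow_attacks(events):
    """events: time-ordered iterable of (timestamp, account, success) tuples."""
    failures = defaultdict(list)
    alerts = set()
    for ts, account, success in events:
        if success:
            continue
        window = failures[account]
        window.append(ts)
        # discard failures that have aged out of the sliding window
        while window and ts - window[0] > WINDOW:
            window.pop(0)
        if len(window) > THRESHOLD:
            alerts.add(account)
    return alerts

# Two failed logons per hour never trips a lockout, but over a day they add up:
events = [(datetime(2011, 1, 1) + timedelta(minutes=30 * i), "admin", False)
          for i in range(24)]  # 24 failures spread over 12 hours
print(flag_slow_attacks(events))  # → {'admin'}
```

A real deployment would of course correlate many more signals (source address, geography, time of day), but even this toy example shows why collation across events, not per-event inspection, is what makes the logs useful.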

Avoiding the “Checkbox Culture” that Standards Compliance Alone Generates

There’s too much noise and focus on standards compliance in the industry, in the mistaken belief that this alone will generate security. It doesn’t: taken in isolation, it generates a false sense of security. Information security cannot be treated as an annual tick-box event producing a string of recommendations and good intentions to be acted upon at some later date.

Whilst standards compliance is a necessary part of good governance, the industry really should be talking about generating good security cultures. An interesting study would examine, for the breached companies mentioned above (and others), what percentage had recently achieved compliance with a particular standard, whether they had a CIO/CISO in place, and whether regular staff training was in place. It would make interesting reading.

A security culture is something that starts in an organisation from the top down: the board is updated at regular and frequent intervals about what is being done across the organisation – which business processes need improving and which staff education programmes are in place or being updated. CIOs/CISOs should be constantly improving their skill sets and awareness by attending conferences, reading the latest security articles and keeping abreast of innovative solutions that challenge the industry’s established thinking.

The human factor – the education of staff – is an area that is often overlooked: PwC’s 2010 Information Security Breaches Survey found that 80% of large organisations reported an incident caused by staff, yet we rarely hear of the need to educate users regularly. It simply isn’t good enough to keep blaming staff if you don’t have a regular training programme in place. Nor is it good enough to “educate and forget”, i.e. only train when a new person joins an organisation and never again: there needs to be a programme in place to educate users at regular intervals, to accommodate new threats, changes to legislation and best working practices.

Taking Professionalism Seriously

It’s long past time that our industry took professionalism seriously. Think of a visit to a doctor or a surgeon performing an operation. Would you let a surgeon operate on you who hadn’t bothered to attend medical school or didn’t think the exams were “really that important or necessary”? There certainly are some bad doctors out there, but the reverse argument of not bothering with professional qualifications to practice in the medical profession doesn’t hold water. Yet that is exactly what a small portion of our industry is doing!

As if to illustrate the point, last week I heard a hilarious story from a journalist who told me how he had uncovered someone who had been blagging their way around the industry as a “security consultant” (incidentally, the journalist has given permission for me to repeat this, and the information was obtained via publishable sources and not via phone blagging or phone hacking!). Not only did the ‘consultant’ have no information security qualifications or certifications, but he had previously been working in… the hair products industry! After containing our laughter in the restaurant, I remarked: “Securing hair braids yesterday, securing data tomorrow!” It would be a funnier joke if it wasn’t happening in our industry.

Would the medical profession allow unqualified staff?

“Don’t worry sir, I’m fully unqualified! I’m good with people though and… I used to cut hair for a living, so you have nothing to fear. How hard can this medical thing be?”
Image: © Joel Calheiros - Fotolia.com

The industry needs to think about standardising on acceptable criteria for practising in this field. I would propose that anyone wanting to employ someone in our industry insist on the CISSP certification as a minimum benchmark, as it demonstrates many of the areas previously discussed – a holistic view and relevant experience – and it both encourages and requires constant improvement and education. It also demonstrates a commitment to the industry. Getting information security wrong can have a really serious impact on your business, and it certainly isn’t about just selling security solutions as a quick ‘fix’. I would also urge organisations to ensure that people at all levels are qualified, from the CISO, CIO and CTO down: it’s not good enough to ensure your junior staff are qualified whilst your security leaders are not – lead by example.

Changes to Legislation Are Required

I both welcome and support a change in the law to include mandatory breach notification for the UK – as is already the case in US states such as California. I would also like to see companies, as part of filing their annual accounts or statutory annual return, list the security measures they are taking or will be undertaking to safeguard personal data. If there’s a statutory requirement to report annual financial accounts, why not have something (albeit more sophisticated) in place for information security as well? If people have to sign off on security measures that subsequently turn out to be false or inadequate (and face prosecution as a result), it may just make boards realise that inaction is not an option and that people’s data and privacy is something we value as a society. Granted, this alone isn’t going to be a panacea or an easy thing to legislate – a one-size-fits-all policy is not appropriate for the obligations of every organisation – but then we already accommodate different annual accounting requirements for different sized organisations under the Companies Act 2006.

Without enforcement, legislation alone is unlikely to succeed in changing culture. I would also like to see stiffer penalties enforced in the UK for breaches of section 55 of the Data Protection Act 1998. In May 2006, the Information Commissioner published a report, “What Price Privacy?”, which uncovered the illegal trade in personal information, with a follow-up report published six months later. The act of blagging (which has been at the root of all the problems in the phone hacking scandal) is a criminal offence under section 55 of the Data Protection Act 1998. Currently, however, it carries a fine of up to £5,000 in a Magistrates’ Court and no custodial sentence. Whilst section 77 of the Criminal Justice and Immigration Act 2008 (CJIA) subsequently included provision for a custodial sentence of up to two years, that provision cannot come into effect until the Secretary of State makes the relevant order. Whilst the recent phone hacking scandal has focused on some journalists using blagging to obtain personal information, the 2006 “What Price Privacy?” report showed the practice is far from confined to journalism – it is just the visible tip of a much bigger iceberg lurking below – and I urge people to read that report from the ICO.

To conclude: whilst I don’t share the pessimism of some in the industry, by the same token I’m not complacent enough to think there isn’t still much to be done – changing and enforcing the law, educating both board members and employees, and ensuring the industry thinks holistically, as individuals and as organisations.

Phil Stewart is Director of Excelgate Consulting and Director of External Communications, ISSA UK