The 12 Days of Cyber Christmas

December 19, 2014

As 2014 draws to a close, we will not make any predictions for 2015. We’ll leave that to astrologers and Time Lords (and we don’t believe in either!). What we can do, however, is learn from the past. As an industry, we haven’t been good at educating those outside it about the key things they should be doing, and the same basic security measures, either poorly implemented or missing completely, pop up time and again. So here is our take on the 12 days of Christmas: the 12 Days of Cyber Christmas, 12 essential security measures that organisations, both large and small, should be taking.


On the 12th day of Christmas,
Cyber Security sent to me:
12 remotes removing
11 policies persuading
10 defaults denying
9 apps a patching
8 coders coding
7 keys safeguarding
6 risks a weighing
5 unsolicited pings
4 complex words
3 Varied Pens
2 console dumps
And No View access to SNMP

On the 12th day of Christmas, Cyber Security sent to me: 12 remotes removing

It’s truly shocking how many accounts have remote access to resources which don’t require it. Think about this from both a technical and a “need to know” business perspective. On the technical side, do you really need Remote Desktop (in Windows) via RDP? If you conclude you really do, make sure to remove the local administrator account from RDP and use specific user accounts with strong passphrases (see the 4th Day of Christmas). In the United States Senate report on the 2013 Target breach, published earlier this year, one of the key findings was that Target had given remote access to a third-party vendor which did not appear to be following information security practices.
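
As a minimal sketch of that first check (assuming a Windows machine, and treating the flagged account names below as examples to adjust for your own environment), the snippet lists the members of the local “Remote Desktop Users” group by parsing the output of the built-in net localgroup command:

    import subprocess

    # Accounts you would NOT expect to see with remote desktop rights (illustrative assumption)
    UNEXPECTED = {"administrator", "guest"}

    def rdp_group_members(group="Remote Desktop Users"):
        """Return members of a local group by parsing 'net localgroup' output (Windows only)."""
        out = subprocess.run(["net", "localgroup", group],
                             capture_output=True, text=True, check=True).stdout
        lines = out.splitlines()
        # Members are listed between the dashed separator line and the closing status line.
        start = next(i for i, l in enumerate(lines) if l.startswith("---")) + 1
        return [l.strip() for l in lines[start:]
                if l.strip() and not l.startswith("The command")]

    if __name__ == "__main__":
        for member in rdp_group_members():
            flag = "  <-- review this account" if member.lower() in UNEXPECTED else ""
            print(member + flag)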

11 policies persuading

As we reported in September, organisations large and small need to put security policies, user training and quarterly security KPIs in place, monitored by their HR teams. If you are looking for somewhere to start, physical security and social media usage are two good areas to tackle in early 2015. It’s simply too easy to get into some organisations on the physical side, whilst we’ve seen numerous politicians and would-be politicians forced to resign following faux pas on social media in recent months.

10 defaults denying

The constant revelations from Edward Snowden in 2014 remind us all of the dangers of giving too much access to data to one individual in the organisation. According to the Verizon 2014 Data Breach Investigations Report, 88% of security incidents were caused by insider misuse, either accidental or malicious. Review your access permissions to data too, and ensure you have appropriate technical controls in place both to grant access only to appropriate personnel, and to audit and track access to data. I’ve been really impressed with Vormetric’s Data Security Platform, which specifically addresses this issue using file-level encryption and key management.
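
As a small illustration of the auditing side (a sketch only, assuming a Unix-like file server and a directory path of your choosing), the snippet below walks a directory tree and flags files that are group- or world-writable, which is often a sign that default permissions were never tightened:

    import os
    import stat

    def overly_permissive(root):
        """Yield (path, mode) for files that are group- or world-writable."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue  # unreadable or vanished; skip it
                if mode & (stat.S_IWGRP | stat.S_IWOTH):
                    yield path, stat.filemode(mode)

    if __name__ == "__main__":
        for path, mode in overly_permissive("/srv/shared"):  # assumed path
            print(f"{mode}  {path}")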

9 apps a patching

According to Secunia’s Vulnerability Review 2014, “the more widespread a program is, and the higher the unpatched share, the more lucrative it is for a hacker to target this program, as it will allow the hacker to compromise a lot of victims”. This is reflected in their research, where Microsoft IE (99% market share) had 270 known vulnerabilities at the time of publication, whereas Apple Safari (11% market share) had 75 known vulnerabilities. They also found that 33% of users running IE were using unpatched browsers, and that 14% of those running Apple’s Safari browser were unpatched. Ensure you are patching all your applications, both in-house and third-party, regularly in 2015!
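
Staying on top of third-party patching is easier if you can enumerate what is out of date. As one small, hedged example for the Python components of an in-house estate (it assumes pip is available to the interpreter running it), the sketch below calls pip’s own outdated-package report and prints anything lagging behind the latest release:

    import json
    import subprocess
    import sys

    def outdated_packages():
        """Return pip's view of outdated packages for this interpreter as a list of dicts."""
        out = subprocess.run(
            [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        for pkg in outdated_packages():
            print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")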

8 Coders coding

If you are developing any applications in-house, be sure to encourage secure coding best practices. A wealth of information is available on this subject; the Microsoft Security Development Lifecycle is a good starting point.
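
To make the point concrete, here is a minimal sketch (using Python’s built-in sqlite3 module and an assumed ‘users’ table) of the kind of habit secure coding guidance drives at: never build queries by string concatenation, always pass user input as bound parameters.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # hostile input

    # Vulnerable: user input concatenated straight into the SQL text
    unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
    print("unsafe query returns:", conn.execute(unsafe).fetchall())

    # Safer: the same query with a bound parameter; the input is treated as data, not SQL
    safe = "SELECT role FROM users WHERE name = ?"
    print("parameterised query returns:", conn.execute(safe, (user_input,)).fetchall())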

7 Keys Safeguarding

In the eBay breach announced on 21st May 2014, hackers stole the personal information of 145 million active eBay users after they compromised the database containing encrypted passwords. This suggests that there wasn’t proper key management in place to safeguard the data. Remember never to grant access to keys where it isn’t required. Encryption of data, both in-house and in the cloud, along with key management, is handled very efficiently by Vormetric’s Data Security Platform.
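
As a hedged sketch of the underlying principle (keep the key away from the data it protects), the snippet below uses the third-party ‘cryptography’ package to encrypt a record with a symmetric key held separately from the data store; in a real deployment that key would live in an HSM or a key-management service rather than alongside the database.

    from cryptography.fernet import Fernet  # pip install cryptography

    # In production the key comes from an HSM / key-management service, never from
    # the same database or disk as the data it protects. It is generated in-process
    # here purely to keep the sketch self-contained.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    record = b"customer: J. Smith, card ending 4242"
    token = fernet.encrypt(record)          # what you store in the database
    print("stored ciphertext:", token[:40], b"...")

    # Only a holder of the key can recover the plaintext
    print("recovered:", fernet.decrypt(token))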

6 Risks a weighing

Any organisation, from SME to large enterprise, should be regularly reviewing its risks. To determine where your resources are best spent mitigating risk, you need to look at the value of the assets you are protecting, the cost of a breach, and the legislation you must comply with, and then decide where best to spend your resources. A good starting point for a risk management framework is ISO 31000.
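
A minimal sketch of the weighing itself, with entirely made-up assets and figures: rank risks by likelihood times impact so that mitigation budget follows exposure rather than gut feel.

    # Illustrative figures only: likelihood per year (0-1) and impact in pounds are assumptions.
    risks = [
        {"asset": "cardholder database", "likelihood": 0.10, "impact": 2_000_000},
        {"asset": "public web server",   "likelihood": 0.40, "impact":   150_000},
        {"asset": "office laptops",      "likelihood": 0.60, "impact":    40_000},
    ]

    for risk in risks:
        risk["exposure"] = risk["likelihood"] * risk["impact"]  # simple annualised loss expectancy

    for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
        print(f"{risk['asset']:<20} expected annual loss ~ £{risk['exposure']:,.0f}")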

5 Unsolicited Pings

When was the last time you did a network/asset discovery exercise on your network? If you handle credit card data you are required to do a quarterly network vulnerability scan. Can you account for all the devices discovered on your network?
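
As a hedged starting point for that discovery exercise (assuming a Unix-like host with Linux-style ping flags, and a /24 of your choosing; the documentation range below is just a placeholder), the sketch sweeps an address range and lists which hosts answer. A proper exercise would layer ARP, port and passive discovery on top, since plenty of devices simply don’t answer ping.

    import ipaddress
    import subprocess

    def ping_sweep(network="192.0.2.0/24"):
        """Return hosts in the given network that answer a single ICMP echo (Unix 'ping')."""
        alive = []
        for host in ipaddress.ip_network(network).hosts():
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "1", str(host)],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
            )
            if result.returncode == 0:
                alive.append(str(host))
        return alive

    if __name__ == "__main__":
        for host in ping_sweep():
            print("responds to ping:", host)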

4 Complex Words

If there’s one message we need to get across to end-users in eCommerce in 2015, it’s to avoid default, vendor-supplied passwords and weak passwords. As the ICO warned us this November, after a Russian website provided live footage from thousands of webcams across the UK, we should never leave vendor-supplied passwords on our devices. Make sure all the devices on your network use strong passphrases: not just single words, but phrases containing numbers and special characters. An example would be: MerryChr1stmas2You! Remember also to use a different phrase per device and logon account.
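
Here is a hedged sketch of generating such a passphrase with Python’s standard ‘secrets’ module; the wordlist is a tiny stand-in, so substitute a proper diceware-style list in practice.

    import secrets
    import string

    # Tiny illustrative wordlist: in practice use a large diceware-style list.
    WORDS = ["merry", "christmas", "cracker", "robin", "snowfall", "pudding", "holly", "carol"]

    def passphrase(num_words=3):
        """Build a passphrase from random words plus a digit and a special character."""
        words = [secrets.choice(WORDS).capitalize() for _ in range(num_words)]
        digit = secrets.choice(string.digits)
        symbol = secrets.choice("!?$%&*")
        return "".join(words) + digit + symbol

    if __name__ == "__main__":
        print(passphrase())   # e.g. HollyPuddingRobin7!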

3 Varied Pens

It’s good security practice to change your penetration tester regularly: the black art of ethical hacking is partly creative and intuitive, and what one pen tester finds, another may not. People have their favourite methods and utilities and, over time, if you keep re-using the same penetration testers you may find you get the same results. Change your penetration testers regularly so that no stone is left unturned in the hunt for vulnerabilities. In addition to the human element, mid-tier to large organisations should also invest in some automated threat analysis. This year we’ve been very impressed with the offering from AlienVault, with its large OTX database and threat analysis.

2 Console Dumps

One of the key findings in the 2014 Senate report on the 2013 Target breach, which affected the personal and financial information of some 110 million customers, was Target’s failure to respond to multiple warnings from the company’s anti-intrusion software. Ensure your SIEM is properly configured, and that you have the right resources to monitor those logs in real time and on a proactive basis.
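
As a minimal, hedged illustration of acting on those warnings rather than letting them scroll past (the log path and the alert keywords are assumptions), the sketch below tails a log file and surfaces anything that matches, which is the kind of rule a SIEM would run at much greater scale:

    import re
    import time

    LOG_PATH = "/var/log/ids/alerts.log"          # assumed path
    PATTERN = re.compile(r"malware|exfiltration|brute.?force", re.IGNORECASE)  # assumed keywords

    def follow(path):
        """Yield new lines appended to a file, like 'tail -f'."""
        with open(path) as fh:
            fh.seek(0, 2)                          # jump to the end of the file
            while True:
                line = fh.readline()
                if not line:
                    time.sleep(1.0)
                    continue
                yield line

    if __name__ == "__main__":
        for line in follow(LOG_PATH):
            if PATTERN.search(line):
                print("ALERT needs a human:", line.strip())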

And no View Access to SNMP

SNMP (Simple Network Management Protocol) is used in systems management to monitor devices such as routers, switches and printers; on systems management consoles it enables agents to report the status of IP devices. Unfortunately, a large number of SNMP v1/v2c configurations give any user an unrestricted view of the entire agent tree, which can yield enough information to obtain read-write or admin-level SNMP access.
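
A quick, hedged way to test your own kit (assuming net-snmp’s snmpwalk utility is installed and you have permission to probe the device): if a walk with the default ‘public’ community string returns the whole tree, the view is wide open and should be restricted, or the community string changed.

    import subprocess

    def walk_with_default_community(host, community="public"):
        """Run net-snmp's snmpwalk against a host and return the lines it yields (empty if refused)."""
        result = subprocess.run(
            ["snmpwalk", "-v2c", "-c", community, host],
            capture_output=True, text=True, timeout=30,
        )
        return result.stdout.splitlines()

    if __name__ == "__main__":
        lines = walk_with_default_community("192.0.2.10")   # a device you are authorised to test
        if lines:
            print(f"Default community accepted: {len(lines)} OIDs exposed, e.g.")
            print(lines[0])
        else:
            print("Walk refused or empty: default community appears to be disabled.")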

Wishing all our readers a Merry Christmas and Happy & Prosperous 2015.


What Price, Privacy?

October 15, 2014

At IP Expo in London last week, the inventor of the World Wide Web, Tim Berners-Lee, outlined his vision for the future of the web and of data ownership in his keynote speech.


© Victoria – Fotolia.com

Over the past few years there have been numerous concerns about big data and its use. Tim Berners-Lee argues that whilst he is perfectly happy for his personal data to be used by professionals where it is of benefit to him or to mankind (e.g. by healthcare professionals after a road traffic accident), this does not extend to commercial organisations using it for targeted marketing. Tim argues that the owner of personal data is the data subject, not large corporations, and that we should have a say in how it can be used and whether it can be sold or merged with other data sets.

Others take a different view. An interesting panel discussion with Raj Samani, Andrew Rose and Josh Pennell ensued later in the conference. There was disagreement amongst the panellists over whether the ‘reputational damage’ argument regarding data loss actually means anything to organisations these days, since data breaches are seemingly ubiquitous and those outside the industry, especially the Facebook generation, simply don’t care if their personal data is lost. Time will tell if there is long-term damage to Target, for example, from its breach, although early indications suggest that Target’s share price has largely recovered.

Earlier in the month, Raj Samani argued in a webcast for Forbes that, as a consumer, he gives consent for his personal data to be used and shared when he signs up for services, and that most consumers will happily do so, given consent, perceived value and transparency. After all, services such as Twitter and Facebook are free for the end-user. Raj does concede, however, that the terms and conditions are rarely clear, being long legal documents, and that few end users will actually read them. There is rarely effective transparency in these cases if end-users do not realise what they are signing up to.

How Tim’s proposal might work in practice would be to change legislation and afford personal data the same status as copyrighted material: use in a given context requires the owner’s specific consent, and re-use requires a royalty paid to the data subject. It might also solve the forthcoming conundrum about the “right to erasure” in the new EU Data Protection legislation: if I ask to be removed from one database, in theory the deletion should cascade into third-party databases where the data has been sold on. It would also take away some of the burden from over-worked regulators like the ICO, who are severely under-resourced.

I’m sure many organisations will say such a model is unworkable, but it may just make organisations think about how and where they use our personal data, especially if over-use of personal data directly affected the bottom line via royalty payments. Forty years ago, in the Cold War era, if you had suggested an interconnected network of disparate computers upon which banking, finance, government, education, retail and leisure would all rely, and which would become both ubiquitous and intrinsic to our daily lives, the idea would probably have been dismissed as science fiction and unworkable. Yet in 2014 we rely upon the internet to sustain all these things and more. Our industry needs such radical thinking today for a new model of data protection.

Phil Stewart is Director, Excelgate Consulting


Compliant But Not Necessarily Secure? How HR Can Help..

September 25, 2014

As the recent Target breach has shown, being compliant does not necessarily mean being secure. As the United States Senate’s recent analysis of the theft of personal and financial information of 110 million Target customers illustrates, key warning signals that Target’s network was being infiltrated were missed within the organisation. Both anti-malware and SIEM tooling identified suspicious behaviour on the network, yet two days later the attackers began exporting customer data from it.


© Rawpixel – Fotolia.com

The fact that Target was certified as compliant with the PCI DSS standard just two months before the breach took place has certainly caused a lot of debate in the industry. I’ve always argued that simply pursuing an annual ‘checkbox exercise’ is not enough: there must be a lasting, ongoing cultural awareness of data security.

I’ve also long argued that whilst responsibility and drive for data security lie at board level, the entire business needs to be aware and on board too. That’s why HR is vital to achieving this. KPIs for data security, along with employee training programmes, need to reinforce the goals of the CISO. Data security KPIs should be SMART: Specific, Measurable, Achievable, Realistic and Time-orientated. Across all business units there should be at least annual (and perhaps quarterly, depending upon the nature of the business) employee awareness training programmes.

The KPIs should be pertinent to the business unit they operate in and relevant to the data security standard your organisation is working to gain (or maintain) compliance with. For PCI DSS, for example, the requirement to ‘Protect Cardholder Data’ clearly has different implications for different business units. For a call centre, the KPIs should cover not storing PANs in a readable format; not recording CV2 numbers when taken over the phone; not writing cardholder details down on bits of paper for subsequent entry; and the secure physical storage of paper records. For development, they would pertain to the encryption methods used, and key storage and protection; how PANs are rendered unreadable electronically, in adherence to the standard; and how cardholder data is transmitted across networks. For IT support, they could relate to maintaining up-to-date anti-malware and intrusion prevention systems; hourly monitoring of SIEM information; and weekly reports to senior management on monitoring status and patch levels.
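
As a small, hedged illustration of the ‘render PANs unreadable’ point (the masking rule follows PCI DSS’s commonly cited first-six/last-four display limit, and the sample number below is a standard test card, not real data), this sketch masks a PAN before it is ever written to a log or screen:

    def mask_pan(pan: str) -> str:
        """Mask a card number for display, keeping at most the first six and last four digits."""
        digits = "".join(ch for ch in pan if ch.isdigit())
        if len(digits) < 13:
            raise ValueError("not a plausible PAN")
        return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

    if __name__ == "__main__":
        print(mask_pan("4111 1111 1111 1111"))   # 411111******1111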

Whilst line managers are usually responsible for setting KPIs in a business, I strongly believe HR can make a valuable contribution by enforcing company policy and ensuring all line managers include quarterly data security KPIs in their target setting. A function of HR is to safeguard the reputation of the business, which is also a function of the information security professional.

A recent study by insurance firm Beazley analysed more than 1,500 data breaches it serviced between 2013 and 2014, and discovered that employee errors accounted for 55% of them. HR is also about people management and defining the culture of an organisation. Whilst I’m not suggesting that HR take on responsibility for information security, they certainly have a part to play in ensuring that the correct mindset operates across all parts of the business. Changing an organisation’s culture requires a sustained effort from numerous players across the business: and HR is one of the key ones.

Phil Stewart is director of Excelgate Consulting.


Bring Your Own Pandora’s Box

May 1, 2012

Bring Your Own Device (BYOD) seems to be spreading through the industry as a debating point like a virulent virus. Is it fast becoming the 21st-century version of “Pandora’s box” for the workplace?


In ancient Greek mythology, Pandora’s box contained all the evils of the world. Compelled by her own curiosity, Pandora opened it, and all the evil contained within escaped and spread all over the world. Is “Bring Your Own Device” (BYOD) the 21st-century equivalent?
Image: © foaloce - Fotolia.com

I’ve long been an advocate of information security professionals adopting a more flexible approach to giving end-users the functionality and flexibility they want: a happier user is a more productive user. Indeed, a recent study published by O2 last month [1], the largest flexible working pilot of its kind ever run in the UK, found that workers were more productive, happier and richer if they did not go into the office.

Users want to experience the functionality, applications and interfaces that they use and are familiar with in their private lives. They want devices that allow them to work remotely from outside the office and on the move. All things can be taken too far, however, and this is exactly what is happening with BYOD. It seems that some organisations appear intent on throwing out the corporate security (baby) with their data (the bathwater).

BYOD should be approached with the same three basic principles as any other information security issue: confidentiality, integrity and availability. Vast sums were spent on secure laptop rollouts in the noughties to give workers the ability to work from home and on the road securely. Did we ask or allow users to bring their own home laptops (along with their malware!), plug them into the corporate network and hope for the best? Of course we didn’t!

Some good reasons why BYOD may not be such a great idea:

  • What happens if the device is lost or stolen? Along with the device goes (unencrypted) corporate data and contacts, with no ability to remotely wipe or kill it.
  • Removable media is easily targeted by would-be remote hackers as an easy way of bypassing network firewalls: it is the assumed infection route for the Stuxnet virus, for example.
  • Smartphones nowadays usually have both voice recording and high-definition camera capabilities, easily allowing for industrial espionage. There’s a good reason most high-security organisations don’t allow them on their premises.
  • There are legal ramifications in most geographies to monitoring employees’ personal web behaviour.
  • Laws on monitoring employees’ personal data and behaviour vary from geography to geography: a potential headache for corporate risk departments.
  • How will you monitor the mobile devices for malware, and prevent its introduction into the corporate network?
  • How will you identify which device is ‘rogue’ and which is ‘approved’ in a true BYOD environment?
  • How do you monitor for acceptable use on an employee-owned device? (assuming there are no legal blockers)
  • How will you prevent the employee walking out the door with your customer and supplier contact lists?
  • How do you prevent illicit/unlicensed content from entering the corporate network?
  • Which devices are deemed to be supported and unsupported?
  • Does your IT department have the right skills to support cross-platform mobile applications in a secure fashion?

The more enlightened organisations are, of course, reaching a sensible compromise: rolling out well-known tablets to their workforce in a controlled manner, whilst not allowing private devices to access the corporate network. According to Apple, nine months after the launch of the iPad, 80% of Fortune 100 companies had either deployed or were piloting the device [2]. When deploying mobile devices you should consider all the areas you would have done for a laptop, and more:

  • How will the data on the device be encrypted?
  • How will you transmit data encrypted to your internal network and external service providers?
  • How will you ‘remote kill’ the device should it be lost or stolen?
  • How will you prevent malware and inappropriate content entering the network?
  • How will the user authenticate securely onto both the device itself and also the corporate network?
  • How will you identify vulnerabilities on the mobile Operating System?

It is perhaps my last question which causes me the most concern, since some in the industry appear to think that certain mobile platforms are not open to vulnerabilities and exploits in the same way as their desktop and server counterparts. A recent mystery shopper exercise confirmed my opinion: most do not take the threat posed by mobile devices as seriously as they should. The SANS Institute think otherwise: a whitepaper published by them in 2011 [3] stated that whilst Apple has done a great job by allowing only digitally signed software to be installed on a non-jailbroken iPhone, there were still vulnerabilities that could be exploited, particularly in the web browser (Safari). Earlier this year Reuters reported on a flaw in the Android operating system which allowed hackers to eavesdrop on phone calls or monitor the location of the device. [4]

In order to reduce the risk posed by BYOD, organisations should develop an acceptable usage policy for tablets which clearly identifies the steps end-users should take to complement the security built into the device by the IT department. This should include sensible advice on the dangers of shoulder-surfing when logging on in a public area; ensuring the device is not left unattended in a public place; and connecting regularly to the corporate network to allow updates.

The sensible compromise of rolling out familiar, but secured, tablets in the workplace, combined with an acceptable usage policy, will prevent your employees from seeing management as “tone deaf” and may well realise significant increases in both end-user productivity and creativity, whilst maintaining a sensible approach to safeguarding against the unguarded use of mobile devices in the workplace.

Phil Stewart is Director, Excelgate Consulting & Secretary & Director, Communications, ISSA UK

Sources:

[1] Working From Home More Productive, Telegraph 3rd April 2012:
http://www.telegraph.co.uk/technology/news/9182464/Working-from-home-more-productive.html

[2] Morgan Stanley: Tablet Demand & Disruption:
http://www.morganstanley.com/views/perspectives/tablets_demand.pdf

[3] SANS Institute: Security Implications of iOS:
http://www.sans.org/reading_room/whitepapers/pda/security-implications-ios_33724

[4] Android Bug Opens Devices to Outside Control, Reuters, 24th February 2012
http://www.reuters.com/article/2012/02/24/us-google-android-security-idUSTRE81N1T120120224


2011: The Year that News Made the News

December 15, 2011

2011: a year that started with the continued aftermath of the WikiLeaks saga, and ended with the Leveson Inquiry investigating the phone hacking scandal in the UK. Along the way, many big names suffered data breaches in 2011, including Citibank, Epsilon Marketing, RSA, Google and Sony.

News in the News

2011 was the year that the news made the news. We started the year talking about classified information finding its way into the press and ended the year with an inquiry into the press. Alleged offences under section 55 of the Data Protection Act are at the heart of the phone hacking scandal.
Image: © Claudia Paulussen - Fotolia.com

Back in January, Excelgate reported that, following the ongoing WikiLeaks saga, a coordinated global response across governments was necessary if a repeat of such a massive leak of classified material was to be avoided. The industry is just coming to terms with the ramifications of WikiLeaks, and at the East West Institute in London in June it was recognised that there needs to be greater co-ordination in the future, not just between technical and legal practitioners and politicians, but also across geographies in drafting new data protection and privacy legislation.

2011 saw the launch of some new standards and assurance schemes in the UK. In March, ISSA UK launched ISSA 5173, a new security standard aimed at improving information security for small and medium-sized enterprises. I’ve been involved in the formation of the standard since the foundation of the workgroup back in 2010, and in the collation of feedback since the publication of the draft standard in March. To date feedback has been overwhelmingly positive, and the standard has been well received by the IT press. 2012 will see the publication of a series of guidance documents to accompany the standard. 2011 also saw the launch of CESG’s Commercial Product Assurance, the replacement for the CCTM product assurance scheme for security solutions in the UK public sector. It aims to provide two levels of assurance for security solutions, depending on where the proposed solution is to be used. This should open up the relatively closed market of IL3 (for example, local authorities) by encouraging competition and innovation in that space.

August saw rioting across London and other parts of the UK. Social media was widely reported to have been used by rioters to coordinate some of the riots. As reported in June, the misuse of social media comes in a number of forms: from criminal acts to damaging reputations, both corporate and private. In November, speaking at the London Conference on Cyberspace, the Foreign Secretary, William Hague, whilst warning against the dangers of government censorship of social media, did state that global, coordinated action is needed to deal with social media misuse and cyber attacks. In short, he said “that behaviour that is unacceptable offline is also unacceptable online, whether it is carried out by individuals or by governments.”

The Leveson Inquiry, investigating the culture, practice and ethics of the press, started its formal evidence hearings in November. At the heart of the phone hacking scandal is the fact that somewhere along the chain personal data was potentially procured illegally. The What Price Privacy? report of 2006, published by the Information Commissioner’s Office, had already highlighted widespread abuse of the Data Protection Act in the UK and recommended a custodial sentence for Section 55 offences. As reported in my interview with Christopher Graham in August, the Information Commissioner will continue to push for custodial sentences for the most serious offences.

The message of the What Price Privacy? report has somehow become lost, with the focus falling on journalists rather than the illegal trade in personal data. I certainly don’t want to see restrictions placed upon a free press: it is the foundation of any democracy. Wrong-doing and fraud should be exposed, and there is already a large carve-out of exceptions in the Data Protection Act for journalistic purposes. The trade and procurement of personal data is already illegal, however, and in my view the law is not being enforced severely enough, either in the severity of sentences for the most serious cases or in the number of prosecutions.

I have been impressed with the ICO’s response to date with regards to education and the issuing of fines. They have taken a very reasonable position: that people need education on how to comply with the Data Protection Act. It would indeed be foolish to adopt a mass fining policy when the ICO’s powers to issue a monetary penalty notice only came into effect in April 2010. On the other hand, April 2012 will mark the second anniversary of those powers, and to continue indefinitely in this mode would, in my opinion, be a mistake. I certainly don’t want to see a situation where every single human error results in a fine from the ICO. The Data Protection Act is, however, now a 13-year-old piece of legislation, and organisations have had two years to ensure they comply with the law in this regard. If HMRC operated mainly on a notice-to-improve basis, I’m fairly sure we’d see a significant decline in tax revenues (i.e. non-compliance). They don’t, however; they operate a sliding scale of penalties depending upon the severity of the mistake or non-compliance. For mistakes on VAT returns, for example, penalties are categorised as follows:

  • Careless: you failed to take reasonable care
  • Deliberate: you knowingly sent HMRC an incorrect document
  • Deliberate and concealed: you knowingly sent HMRC an incorrect document and tried to conceal the inaccuracy

In each category there is also a sub-category: prompted and unprompted (i.e. HMRC discovers the discrepancy, or you notify HMRC of it). The fines vary from 100% of the tax due for deliberate, concealed and prompted, down to 0% for careless and unprompted. I’d like to see a similar sliding scale defined for data protection offences, and then enforced, since I simply don’t believe there have been only 7 serious data protection offences in the UK since April 2010. I’ve come across more than that myself this year alone, ranging from the potentially criminal to the careless error.

I’ve also, over the course of 2011, become convinced of the need for a mandatory data breach notification law for the UK, as is already the case in some US states. The naysayers are already out in full force, saying the UK doesn’t need another piece of legislation regulating business. It is worth bearing in mind that this legislation originated in California, that US state well known for over-regulating businesses and stifling innovation (not!). Similarly, the criticism of data breach notification laws is not based upon any real-world experience. A study from the University of California, Berkeley of the views of CISOs in the US showed that data breach notification laws have put data protection and information security firmly into the public eye, and in some cases have actually fostered dialogue between consumers and data controllers regarding their data. They also empower consumers to protect themselves, either by asking awkward questions of their data controllers or by simply shopping elsewhere. We need this raising of awareness and dialogue in the UK too. Why should we either trust or trade with an organisation that doesn’t safeguard our privacy?

Phil Stewart is Director, Excelgate Consulting & Secretary & Director, Communications for ISSA-UK


The Apple of my i

December 7, 2011

As 2011 draws to a close, inspiration for the way forward in the information security industry is drawn from the past and from a man outside it: Apple’s former CEO, Steve Jobs, who passed away on 5th October 2011.

Apple: Think different

Apple’s iconic logo is as well known globally today as their products are. Many urban myths have sprung up over the years about the origins of the Apple logo: from the apple of knowledge from the Garden of Eden; to the claim that the logo resulted from Steve Jobs having worked in an apple orchard; to the death of the founder of computer science, Alan Turing, from biting an apple laced with cyanide. An interview with the logo’s designer, Rob Janoff, in 2009 revealed that none of these was the inspiration for the bitten apple: it was purely a design decision to give the logo scale and make the apple instantly recognisable as such, compared with other fruit such as a cherry. This version of the logo was used by Apple from the mid 1970s until 1998; the “Think Different” slogan was retired in 2002.

It was interesting to see the huge number of tributes to Steve Jobs when he passed away in October this year. Has there ever been such a response to the death of a technologist? His death produced tributes from people from every walk of life and all corners of the globe: from Presidents to Prime Ministers; from rock stars to everyday users of Apple’s products. Steve Jobs was a special person who instinctively knew what his customers wanted, without asking them. He saw the value of technology as an enabler in improving people’s lives, but he absolutely understood that most people are not fascinated by the technology itself, only by what it can do for them in their everyday lives (in the same way the motor car was an enabler in the last century: few people are concerned with how it works). Apple’s products all have an underlying intuitiveness that really does allow their users to unbox and use them straight away, without assuming any prior technical knowledge.

Most of us, however, do not possess the instinctive insight that Steve Jobs had: we have to do our market research with our target audience first. Historically, many information security professionals would not engage with end users at all, since these are the people, they believe, who will ask for things they can’t have and do all the things they don’t want them to do.

Some years ago I was brought into an American investment bank as project manager on a desktop platform refresh programme that had previously failed miserably in its objectives. Successive project managers had come and gone, and it had always seemed a case of one step forward and two back. Typically for the IT department, the team were kept out of sight and out of mind in the lower dungeons of the bank, safely away from daylight and customers. There wasn’t even a telephone for the team: all communication had previously been done via email!

I decided that as well as talking to the head of each business unit to determine what their requirements were, I would also (shock horror!) talk to the end users in each team to determine and capture which applications they used and how they used them in their day-to-day tasks. I also initiated a series of end-user tests, and ensured that a representative from each team came down and tested the applications before the desktop builds were approved and shipped back to the respective end-user desks. When business managers asked why we would be asking for 30 minutes of one staff member’s time for testing, I explained that this was improving staff productivity by reducing helpdesk calls and eliminating the need to recall failed builds. This strategy paid off: not only did we retain the business, we went on to win further rollouts for other parts of the bank.

This year I’ve heard and seen things which beggar belief: a consultant proudly boasting that the organisation he was contracted to work for “deserved data breaches” because “their staff were uneducated” (worse still, he meant this in a general sense, not specifically in relation to information security education). I’ve also heard all security vendors branded as “snake oil vendors”. An interesting concept: I don’t think we’d have much of an industry without security vendors, and I’ve come across one or two unscrupulous practitioners in my time who have scant regard for data privacy themselves, to whom the disingenuous adjective could easily apply.

There certainly are some security vendors to whom the adjective “snake oil” can easily be applied (and a reputable re-seller recently reminded me of one): those that have little respect for their customers’ product feedback; that are in the business purely to make money without advancing genuine information security; and whose products are so desperately clunky that they require reams of documentation, greatly reduce user productivity and encourage their end users to find workarounds, thus bypassing security policy. Equally, however, there are some innovative vendors on the market that are genuinely interested in advancing information security: by helping develop new standards; by helping the SME community by taking the laborious task of log oversight away from them and outsourcing it to specialists; or by helping to secure the use of the cloud. I’ve come across all these types of vendors this year too. To label all security vendors in the same fashion is not only disingenuous to all vendors but also rather childish.

Earlier this year, when I interviewed the UK’s Information Commissioner, Christopher Graham, for the ISSA, he remarked how he felt that end users were just not getting the message regarding data protection. Too often we see the same old problems: users not being educated, making basic mistakes. Personally, I think we have an industry that’s geared up for messaging aimed mainly at board and manager level and at legal compliance, so is it any wonder? Who is teaching end users how to handle personal data correctly, and what should and shouldn’t be stored about credit cards in their day-to-day jobs? Similarly, I’ve always disliked the term “evangelist”, widely used in our industry, since it implies preaching. Who on earth likes being preached to? Perhaps that’s why few end users are listening.

We urgently need an approach where information security professionals think about being business enablers whilst enhancing security, and can talk in a language their end users understand. For twenty-plus years now, we’ve been thinking that all our problems will be solved if only we throw more technology at them. Yet still we see data breaches. Similarly, we need security products that are focused on improving end-user productivity, rather than working against the business. Then users might stop looking for workarounds, both to the solutions and hence to their security policy.

If only Apple did iSec!

Phil Stewart is Director, Excelgate Consulting  and Secretary and Director, Communications for ISSA UK.


Every Cloud Has A Silver Lining

November 23, 2011

As more organisations look at ways of cutting costs, outsourcing IT to the cloud makes sense from a commercial perspective. Is your company and customer data secure in the cloud, however? Have you taken adequate steps to do thorough due diligence in the procurement cycle? There may be compliance issues in rushing to the cloud that you have not considered.


The ubiquitous term “cloud computing” merely refers to applications, services or data that are managed outside the boundaries of the corporate network by a third party. Many large organisations have already outsourced applications or data storage as a way to cut costs. Many SMEs too have already widely adopted cloud computing, using it for a variety of services including web and email hosting; CRM; and invoice processing.
Image: © shutterbug - Fotolia.com

Earlier this year I was invited to attend the Cloud Computing World Forum in London. What came as no surprise is that cloud computing is already widely adopted by many large organisations. As one CISO put it so succinctly: when your boss asks you to reduce costs by over 20%, we’ve already bought the cheaper coffee and reduced headcount, so we now need to outsource our IT, both to reduce operating costs and to free up valuable floorspace for other purposes. Whilst there were some good sessions at the conference on security in the cloud, what struck me is how few of the vendors present were focused on security.

In many ways the challenges of security in the cloud are no different to those the information security professional has always had to face: confidentiality, integrity and availability of data and services. The three challenges that I would argue cloud computing does present as new, however, are: due diligence of suppliers, to ensure there aren’t legal and compliance issues; user authentication, within the context of being managed by a third party; and the unique threat that virtualisation poses when used in cloud computing.

Due Diligence of Suppliers

Before you can even consider migrating to the cloud, you need to identify and classify your data in-house. What data is customer- or business-sensitive? Where is it stored currently? Are you storing personal data and/or credit card data? Think about the implications for compliance, for example with the Data Protection Act 1998 and PCI DSS. If you are storing credit card details and you outsource operations, you may well increase your PCI DSS scope. For most organisations it usually makes sense to outsource credit card payment transactions to a PCI DSS compliant provider. Regarding the Data Protection Act 1998 here in the UK, it is worth bearing in mind the 8th principle of the Act:

“8. Personal data shall not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.”

As the law currently stands, at the time of writing, in the UK you remain the data controller for personal data: outsourcing the data storage and management to a third party doesn’t change this. You therefore need to make sure you ask appropriate questions of your proposed suppliers. A good starting point (but not an exhaustive list):

  • Where is the data hosted?
  • Where is the data replicated to?
  • What technical, physical and procedural controls are in place to protect the outsourced assets?
  • Is the proposed provider certified against any internationally recognised standard? ISO 27001 certified and PCI DSS compliant providers are helpful here
  • What are the local data protection laws in the country(ies) where the data is hosted and replicated?

It must be stressed that you should always seek legal advice when determining if the proposed supplier offers the same levels of protection as defined under your own jurisdiction regarding data protection.

You should also carry out a proper risk assessment to identify which services are business critical, what makes business sense to outsource, and what is simply too risky or cost-ineffective to outsource. What are the legal implications of outsourcing the data? The international standard ISO 31000 (risk management: principles and guidelines) provides extremely useful reference material for implementing a risk management process.

User Authentication in the Cloud

Secure user authentication is not a new challenge in itself, but when combined with a remote network hosted by a third party it does present some new challenges. For example, if the user is already authenticated internally on your network, perhaps via a directory service, can they be seamlessly recognised by the third party’s network without compromising security? Mobile workers, increasing both in number and in the variety of devices they use, also need to be authenticated by the cloud provider’s network securely, without in any way compromising the security of your data on that network or indeed of your own network.

One vendor which has impressed me in this space is Ping Identity, who offer identity management software to enable Single Sign-On as a service for cloud resources. It works with both mobile devices and web browsers, and integrates with Active Directory or cloud identity providers. In addition, Ping Identity extends the capabilities of Active Directory, enabling control of user management, policies and access, and integrates with over 30 identity and infrastructure platforms. I was impressed with their demonstrations at the show, and their innovative offering is worth a look.

 

Virtualisation Poses a Unique Threat

In cloud computing, a program called a hypervisor allows multiple operating systems to share the same hardware resources. In essence the hypervisor controls access to these resources amongst the different operating systems. Whilst the operating system at the client (the guest OS) thinks it has full access at all times to the resources it requires, behind the scenes the hypervisor is carefully managing access to the host’s processor and memory resources, so that each guest operating system gets the resources it requires at that moment in time without disrupting access for the other guests. It is partly this principle, along with economies of scale, that allows the better utilisation of resources that makes cloud computing cost effective.

One key concern, rarely addressed to date, is that malicious code could infect one customer’s machine and then spread, via the underlying hypervisor, to other customers’ machines. There was a lot of talk around this time last year of a collaboration between NC State University and IBM on a prototype product, HyperSentry, that specifically addressed this threat, but it seems to have gone quiet recently. I hope that IBM, as well as other vendors, looks at ways of addressing this unique threat.

 

Cloud Security Initiatives

I have no doubt that most large organisations have both the legal and technical resources available to carry out an effective due diligence process, should they choose to do so. When it comes to SMEs, however, they don’t have access to in-house technical and legal resources. What is required to address these issues effectively is a cloud assurance scheme. There are currently two major initiatives in this space:

  • STAR – Security Trust & Assurance Registry from the CSA (Cloud Security Alliance)
  • CAMM – Common Assurance Maturity Model

In the case of STAR, the CSA has created a free, online repository of documents that list the security controls of cloud computing providers who have gone through a self-assessment. The documents list a series of controls and whether or not the provider has them. The CSA is currently urging all cloud providers, large and small, to provide a complete self-assessment for publication.

CAMM is currently in its alpha pilot phase, and aims to provide a framework for assessing the information assurance maturity of a third-party provider or supplier (of which cloud providers are currently a major part). The results will then be published in an open and transparent manner.

Whilst both initiatives are to be welcomed, both need to address the challenges of SME due diligence (given their constraints) and the unique threat posed by the hypervisor. A cloud assurance model that effectively addresses these issues is definitely needed by the industry.

CSA’s STAR is available from the CSA website; CAMM is currently undergoing its alpha pilot and more information is available at the CAMM website.

Phil Stewart is Director, Excelgate Consulting  and Secretary and Director, Communications for ISSA UK.