A legal minefield for ethical hackers: Critical analysis of the current legal and regulatory frameworks surrounding cybersecurity in the UK
The Cyber Security Breaches Survey 2020 indicates that almost half of UK businesses (46%) encountered a security breach or attack within the last 12 months (Johns 2020, 35). Compared with 2019, this figure has climbed by 14 percentage points (Vaidya 2019, 1). In response to this omnipresent and ever-increasing threat, the UK government has recognised cybercrime as a Tier One risk to UK interests and initiated the National Cyber Security Strategy (NCSSS) (Government 2016, 13). In the NCSSS, the government pledged to “put in place tough and innovative measures, as a world leader in cybersecurity” (Government 2016, 11).
To accomplish this vision of making the UK secure and resilient to cyber threats, the NCSSS identified various responsibilities and roles, including the government’s own duty to nurture a legal framework that fulfils the triad of protecting the nation, supporting the conviction of cybercriminals and fostering cooperation between the public and private sectors (Government 2016, 26–27). With the strategy period ending in 2021, this essay elaborates – through the perspective of an ethical hacker – on how well the current legal framework can achieve this goal. It critically discusses the practical challenges that arise from the present legislation, how they impede ethical hackers, and ways to mitigate them. The frameworks and regulations examined are the Computer Misuse Act 1990 (CMA1990), the Data Protection Act 2018 (DPA2018) and the Network and Information Systems Regulations 2018 (NIS2018).
The CMA1990, as currently amended, comprises five computer misuse offences. The first three are: unauthorised access to computer material; unauthorised access with intent to commit or facilitate the commission of further offences; and unauthorised acts with intent to impair, or with recklessness as to impairing, the operation of a computer (Computer Misuse Act 1990 1990). The two more recently added offences are section 3ZA, unauthorised acts causing, or creating a risk of, serious damage, and section 3A, making, supplying or obtaining articles for use in an offence under section 1, 3 or 3ZA (Computer Misuse Act 1990 1990).
One core concept inherent to all five sections is the principle of unauthorised access. Section 1 defines it as follows: a person is guilty of an offence if “(a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer, or to enable any such access to be secured; (b) the access he intends to secure, or to enable to be secured, is unauthorised; and (c) he knows at the time when he causes the computer to perform the function that that is the case” (Computer Misuse Act 1990 1990). Two key points of this definition are the concept of mens rea – expressed by the requirement of intent and knowledge of the unauthorised access – and the broad definition of a function. Case law has shown that good intent and motivations do not matter and that the scope of a function is immensely broad (Sommer, n.d.). The latter can be as little as following a link on a non-open-access computer (“Ellis V. DPP” 2001).
It is therefore essential for cybersecurity professionals to obtain authorisation, as the access could otherwise be punishable under section 1. Section 17 (8) of the CMA1990 states that access can only be authorised if it is either carried out by a “person who has responsibility for the computer and is entitled to determine whether the act can be done” or if there is consent from such a person (Computer Misuse Act 1990 1990). The latter is significant for ethical hackers when they are testing third parties, as the express consent and the authorisation derived from it are what set them apart from black-hat hackers.
However, obtaining consent that can stand up in court can be a challenge in itself. Firstly, the CMA1990 does not prescribe a form for the consent, so even oral consent would suffice. A better approach is a written and signed authorisation letter. Such a letter of authorisation, often referred to as a “get out of jail card”, should clearly define who is authorised to perform the tests, which network range will be tested and during which time frame (Shinberg 2003, 9). In practice, this letter also rules out liabilities (Vaidya 2019). Lastly, the person granting the authorisation must have the authority to do so (Shinberg 2003; Rasch 2013).
As authorisation can only be granted for systems under the full control of the authorising person, another challenge arises from the continuous migration of services from on-premises infrastructure into the cloud (Auer 2018). Many cloud service providers (CSPs), such as Amazon Web Services, require their customers to request authorisation for security assessments of certain services before the engagement (Amazon n.d.). Ethical hackers should therefore work closely with their client during the preparation phase to identify assets, their location and the type of service, and to determine whether consent from a CSP is needed (Auer 2018).
As explained earlier, a consent form should specify the target and scope of the engagement. For web applications, the scope would usually be restricted to a specific domain or IP range defined by the principal. During a pentest, however, it can become evident that the initial scope is too narrow or that crucial systems have been missed. Incorrect scopes can be problematic if a web app, for example, uses vulnerable third-party logins or embedded content: due to the lack of authorisation to assess these, the pentest might remain incomplete, or the scope would need to be updated, which might cause additional costs or delays (NCSC 2017). Rasch (2013) furthermore notes that the defined scope should always be checked carefully, as transposed digits in addresses can occur on either side, resulting in a violation of section 1 and possibly even other sections of the CMA1990 and the NIS2018.
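Rasch’s warning about transposed digits can be reduced to a mechanical pre-flight check. The minimal sketch below – the ranges shown are documentation placeholders, not a real client’s scope – validates every target against the ranges taken from the signed authorisation letter before any packet is sent:

```python
import ipaddress

# Hypothetical authorised scope, as it would appear in a letter of engagement.
AUTHORISED_SCOPE = [
    ipaddress.ip_network("192.0.2.0/24"),      # illustrative web tier
    ipaddress.ip_network("198.51.100.16/28"),  # illustrative API hosts
]

def in_scope(target: str) -> bool:
    """Return True only if the target address falls inside an authorised range."""
    addr = ipaddress.ip_address(target)
    return any(addr in net for net in AUTHORISED_SCOPE)

# A single transposed octet silently moves a host out of scope:
print(in_scope("192.0.2.45"))   # True  - within the authorised /24
print(in_scope("192.2.0.45"))   # False - transposed, not authorised
```

Running every candidate target through such a check before scanning turns an easily overlooked typo into a hard failure rather than a potential section 1 offence.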
Nevertheless, even if both parties get the scope right, it is still questionable whether the organisation has the authority to grant access. One circumstance where this becomes an issue is the ever-growing number of privately owned devices used under bring-your-own-device (BYOD) policies (Mansfield-Devine 2013, 17). As authorisation can only be granted for devices owned by the authorising authority, no corporate assessment of such devices can be carried out without the explicit permission of the employee (Haber 2020). In light of the increased attack surface that BYOD causes, this poses a severe limitation on an organisation-wide vulnerability assessment (Marjanovic 2013).
The introduction of section 3A into the CMA1990 has been highly controversial among security tool suppliers and researchers (Stuttard 2008). This section makes it an offence to “make, adapt, suppl[y] or offer to supply any article intending it to be used to commit, or to assist in the commission of, an offence under section 1, 3 or 3ZA” (Computer Misuse Act 1990 1990). According to section 3A (2), a person would even be guilty if he believes that the article is “likely to be used” to commit an offence (Computer Misuse Act 1990 1990). This extensive definition is problematic, as it technically also covers dual-use products such as Nmap or Nessus, which are used by both legitimate and malicious users.
To resolve the ambiguity regarding this likelihood, the CPS has provided some guidance. It recommends that prosecutors take into account whether a product is widely used for legitimate tasks, how it is distributed and the number of installations (CPS n.d.). If there are indications that a tool is primarily intended for legitimate users, the mens rea element of the offence would not be met. Unsurprisingly, given the ambiguity surrounding this offence, it has mostly been charged in conjunction with other offences under the CMA1990 (Murray 2016, 385).
Even though third-party vulnerability research has proven successful at finding critical vulnerabilities, it is always on the brink of constituting computer misuse (Gamero-Garrido et al. 2017, 1501). This unlawfulness arises because vendors do not hire independent researchers, who therefore typically lack the consent required for authorised access under section 17 (5) CMA1990 at the time of access.
If a researcher were, for example, to use Burp Proxy to alter website requests to probe for cross-site scripting (XSS), this would meet the criteria of unauthorised access under section 1 (a). The situation is aggravated by the fact that the researcher acts with mens rea under section 1 (b), as he intends to secure access and knows that the access is unauthorised (Guinchard 2018, 13). Any proof of concept (POC) developed to showcase the exploitation of a vulnerability can further fall within the scope of section 3A CMA1990 (Guinchard 2018, 17).
One way in which independent researchers can continue their work safely even without prior explicit consent is by adhering to the vulnerability disclosure policies of the respective vendors, such as Microsoft or Google (Microsoft n.d.b; Google n.d.). These policies describe what kind of access is allowed and what is out of scope, and they usually guarantee researchers a safe harbour, meaning researchers need not fear legal consequences for the unauthorised access (Microsoft n.d.a). Even though cases such as R v. Mangham show that these policies have limitations and that the rules set by the vendor must be followed strictly, they are at least one way to protect the work of independent security researchers (“R V. Mangham” 2012).
As with vulnerability research, the investigation and attribution of data leaks – such as unsecured S3 buckets or Elasticsearch instances – can be a legal challenge for cybersecurity professionals. Firstly, the systematic brute-forcing and probing of targets using tools such as S3Scanner likely constitute unauthorised access (Cormack 2014, 309–10). While there are indications that internet services implicitly grant authorisation for specific incident-response actions, it is safer to assume that such access would be unauthorised unless tested in court (Walton 2006, 40; Cormack 2014, 313).
Even after an unsecured system is found, it is frequently challenging to attribute it to an owner, which, however, is necessary to disclose it responsibly (Chapman 2018). Ethical hackers might need to retrieve a copy of the data set to confirm its authenticity, identify the owner and investigate the scale of the leak. A download, however, can be problematic under the DPA2018, as data sets often contain various forms of personally identifiable information (PII) such as full names, email addresses and phone numbers (Wilson 2020). The download itself would then constitute processing of personal data in the sense of section 3 (4) (a) of the DPA2018.
Unlike the CMA1990, the DPA2018 offers multiple defences in section 170 (3) to which a researcher may appeal. Legitimate grounds for processing personal data could include preventing or detecting unlawful acts or preventing fraud (Bickerstaffe 2019). The ICO (2020) nevertheless recommends a transparent justification and documentation of such processing (ICO 2020).
Even when carrying out a contracted penetration test, the GDPR and, respectively, the DPA2018 come into play in two ways: firstly, article 32 of the GDPR encourages regular testing of security measures, such as pentests; secondly, the pentest itself has to be GDPR-compliant. Rasch (2013) points out that during a penetration test, an ethical hacker might gain unintended access to a system that stores or transmits sensitive personally identifiable information. Such access can occur if, for example, drop boxes are installed to sniff the network or if a vulnerable target handles personal data. As this could constitute unlawful processing – similar to investigating a data leak – it is vital for any cybersecurity professional to clearly define “a contract and engagement letter that tightly defines the [Rules of Engagement] for the pentester” (Davis 2020). As part of this, the depth of testing could be limited so that the pentester stops engaging as soon as personal data is found (Davis 2020). If access to personal data cannot be completely ruled out, a data processing agreement can be concluded (Piltz 2020). This contract helps both the controller and the conducting pentester to process data lawfully. Even more than usual, pentesters should make sure that they handle any received data with great care and take appropriate measures, such as encrypting data in transit and at rest, to protect its confidentiality (Patrawala 2018).
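One practical way to limit the processing of personal data encountered during a test is to redact it from captured evidence before it reaches the report. The following minimal sketch – the patterns and the sample string are illustrative assumptions, not a complete PII taxonomy – masks email addresses and UK-style phone numbers:

```python
import re

# Illustrative patterns only; real engagements would need a broader PII taxonomy.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?44\s?\d{4}\s?\d{6}")  # simplistic UK-style numbers

def redact(evidence: str) -> str:
    """Mask obvious PII in captured evidence before it is stored or reported."""
    evidence = EMAIL.sub("[EMAIL REDACTED]", evidence)
    return PHONE.sub("[PHONE REDACTED]", evidence)

sample = "Response leaked record: jane.doe@example.com, +44 7700 900123"
print(redact(sample))
# -> Response leaked record: [EMAIL REDACTED], [PHONE REDACTED]
```

Applied consistently in the evidence-collection pipeline, such redaction narrows the personal data a pentester retains and thus the processing that a data processing agreement has to cover.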
Active defence is an umbrella term covering various offence-driven actions an organisation can take, ranging from slowing down or stopping an attack at an early stage up to hack-backs (Attivo Networks 2018). Internationally, the latter received a fair bit of attention when Sony Pictures used a DDoS attack to stop the distribution of sensitive data following a data breach (Glosson 2015, 3). Such measures are strictly forbidden for non-law-enforcement personnel in the UK, as any unauthorised access to an adversary’s system would at least violate section 1 of the CMA1990; a DDoS attack would further violate section 3ZA. Under the current law, cybersecurity professionals are restricted to collecting data about the adversary and passing it on to law enforcement authorities. The CLRNN (2020), however, lowers expectations and warns that law enforcement will primarily focus on events with a criminal outcome (CLRNN 2020, 15–16).
Another popular form of active defence is the use of honeypots. Honeypots are designated, vulnerable computer systems specifically designed and deliberately prepared to be targeted by unauthorised users, both inside and outside an organisation (Walden 2007, 229). Within active defence, they are primarily used for uncovering new attacks and learning about the tools and processes used (Feng Zhang et al. 2003, 231; Sokol, Míšek, and Husák 2017, 2). Honeypots therefore usually log all interactions – such as keystrokes, network connections and system calls – invisibly to the attacker (Nicomette et al. 2011, 149).
Spitzner (2003), however, points out that legal issues such as privacy surrounding honeypots can be complicated (Spitzner 2003, 1:348). This is because even the simplest honeypots record transactional data such as IP addresses or domain names, which are considered personal data according to DPA2018 section 3, as they allow the indirect identification of a data subject (Sokol, Míšek, and Husák 2017, 4–5). For the processing to be lawful, it is necessary to have proper consent, unless article 6 (1) (f) GDPR applies. Sokol, Míšek, and Husák (2017) assume that improving cybersecurity might be a justifiable reason, but point out that data should be erased regularly (Sokol, Míšek, and Husák 2017, 6). Spitzner, in contrast, proposes the idea of a disclaimer, similar to a cookie banner, to mitigate the privacy concerns (Spitzner 2003, 1:357).
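The mitigations described above can be approximated in the logging layer itself. In this hedged sketch – the key handling and the 30-day retention period are assumptions for illustration – source IPs are replaced with a keyed hash so that repeat attackers can still be correlated without storing the raw address, and every record carries an erasure deadline to support the regular deletion Sokol, Míšek, and Husák call for:

```python
import hashlib
import hmac
import time

SECRET_KEY = b"rotate-me-regularly"   # hypothetical key; protect and rotate in practice
RETENTION_SECONDS = 30 * 24 * 3600    # assumed 30-day retention period

def log_event(src_ip, action, now=None):
    """Record a honeypot interaction without storing the raw source address."""
    now = time.time() if now is None else now
    pseudonym = hmac.new(SECRET_KEY, src_ip.encode(), hashlib.sha256).hexdigest()
    return {
        "src": pseudonym,                         # keyed hash, not the raw IP
        "action": action,
        "logged_at": now,
        "erase_after": now + RETENTION_SECONDS,   # deadline for routine erasure
    }

record = log_event("203.0.113.7", "ssh-login-attempt")
print(record["src"][:16], record["action"])
```

Because the hash is keyed, the pseudonyms cannot be reversed by simply hashing the whole IPv4 space, yet the same source still maps to the same identifier across events.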
Besides these operational challenges, it is also worth examining the effectiveness of the legislation in fighting cybercrime. Assessing this performance requires a set of suitable indicators, two of which are the conviction rate and the sentencing level.
Recent numbers from HM Courts and Tribunals Service show that the conviction rate for offences under the CMA1990 has been at almost 90% over the past 11 years (Corfield 2019b). Even though this relative rate is on par with that of other criminal offences, it still warrants criticism. Macewan and others (2008) outline that this is partly due to low sentencing, which typically falls between six to nine months and 18 to 24 months (Corfield 2019a; Macewan and others 2008, 4). Another factor to consider is the number of CMA1990 cases passed on to law enforcement by Action Fraud; in 2018 this indicator was as low as two per cent (Corfield 2019b). The long-term effectiveness of the NIS2018 and DPA2018 cannot currently be assessed due to a lack of sufficient qualitative and quantitative data (Government 2020).
A central operational challenge when trying to convict cybercriminals under the CMA1990 is jurisdiction. Even though the CMA1990 applies worldwide, section 4 (2) requires at least one significant domestic link to be present (Computer Misuse Act 1990 1990). The CPS explains that this is the case if the target or the technology used for the offence is in the home country (CPS 2020). Providing such evidence can be a challenge in itself. With overlay networks such as Tor, which allow almost anonymous and hard-to-trace connections, perpetrators can mask their identity and location almost entirely. Any gathered logs that could provide evidence of a domestic origin would contain only the IP of the Tor exit node rather than that of the system used by the perpetrator (CLRNN 2020, 56).
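In practice, investigators often screen their logs against the Tor Project’s published exit-node list to learn early that a logged address identifies a relay rather than the perpetrator. A minimal sketch, assuming a locally cached copy of that list (the addresses shown are documentation placeholders):

```python
# Hypothetical cached copy of the Tor exit-node list; a real deployment would
# refresh this set periodically from the Tor Project's exit-address service.
TOR_EXIT_CACHE = {"198.51.100.99", "203.0.113.200"}

def likely_tor_exit(src_ip):
    """True if the logged source address matches the cached exit-node list."""
    return src_ip in TOR_EXIT_CACHE

# A match means the log points at the relay, not the attacker's own system:
print(likely_tor_exit("198.51.100.99"))  # True
print(likely_tor_exit("192.0.2.10"))     # False
```

Flagging such entries does not unmask the perpetrator, but it tells investigators which log entries cannot, on their own, establish the domestic link section 4 (2) requires.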
This essay started by questioning whether the current legal framework serves the NCSSS goal of making the UK more secure and resilient to cyber threats. It can be concluded that this goal has only been partially met. While the present legislation – with its strong focus on offenders – supports the conviction of cybercriminals, it is less accommodating of offensive approaches to securing environments. Cybersecurity professionals, who could genuinely improve the nation’s security posture, are too often restricted by the need for express consent, elusive provisions such as section 3A and data protection concerns. Given the high number of security breaches in 2020, it is desirable that future amendments consider the good intentions of ethical hackers and provide them with the much-needed solid legal ground.
Amazon. n.d. “Penetration Testing - Amazon Web Services (AWS).” Amazon Web Services, Inc. Accessed October 4, 2020. https://aws.amazon.com/security/penetration-testing/.
Attivo Networks. 2018. “What Is Active Defense?” Attivo Networks. May 31, 2018. https://attivonetworks.com/what-is-active-defense/.
Auer, Alec. 2018. “The Importance of Consent Forms When Carrying Out a Penetration Test.” The State of Security. May 14, 2018. https://www.tripwire.com/state-of-security/security-data-protection/consent-forms-carrying-penetration-test/.
Bickerstaffe, Michael. 2019. “GDPR and Fraud Investigations – Don’t Panic! - Kennedys.” May 17, 2019. https://kennedyslaw.com/thought-leadership/blogs/fraud-blog-fundamentally-honest/gdpr-and-fraud-investigations-don-t-panic/.
Chapman, Catherine. 2018. “New Tool Helps You Find Open Amazon S3 Buckets.” The Daily Swig | Cybersecurity News and Views. July 16, 2018. https://portswigger.net/daily-swig/new-tool-helps-you-find-open-amazon-s3-buckets.
CLRNN. 2020. “Reforming the Computer Misuse Act 1990.” http://www.clrnn.co.uk/media/1018/clrnn-cma-report.pdf.
Computer Misuse Act 1990. 1990. https://www.legislation.gov.uk/ukpga/1990/18/contents.
Corfield, Gareth. 2019a. “Guilty of Hacking in the UK? Worry Not: Stats Show Prison Is Unlikely.” May 29, 2019. https://www.theregister.com/2019/05/29/computer_misuse_act_prosecutions_analysis/.
Corfield, Gareth. 2019b. “If There Were Almost a Million Computer Misuse Crimes Last Year, Action Fraud Is Only Passing 2% of Cases to Cops.” September 21, 2019. https://www.theregister.com/2019/10/21/action_fraud_computer_misuse_crimes_decrease/.
Cormack, Andrew. 2014. “Can CSIRTs Lawfully Scan for Vulnerabilities?” SCRIPTed 11 (3). https://doi.org/10.2966/scrip.110314.308.
CPS. 2020. “Computer Misuse Act | the Crown Prosecution Service.” 2020. https://www.cps.gov.uk/legal-guidance/computer-misuse-act.
CPS. n.d. “Computer Misuse Act | the Crown Prosecution Service.” Accessed October 1, 2020. https://www.cps.gov.uk/legal-guidance/computer-misuse-act.
Davis, Jessica. 2020. “Evaluating Cyber Readiness, Vulnerabilities with Pen Testing.” HealthITSecurity. January 24, 2020. https://healthitsecurity.com/features/evaluating-cyber-readiness-vulnerabilities-with-pen-testing.
“Ellis V. DPP.” 2001. EWHC Admin 2001: 362.
Feng Zhang, Shijie Zhou, Zhiguang Qin, and Jinde Liu. 2003. “Honeypot: A Supplemented Active Defense System for Network Security.” In Proceedings of the Fourth International Conference on Parallel and Distributed Computing, Applications and Technologies, 231–35.
Gamero-Garrido, Alexander, Stefan Savage, Kirill Levchenko, and Alex C Snoeren. 2017. “Quantifying the Pressure of Legal Risks on Third-Party Vulnerability Research.” In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 1501–13.
Glosson, Anthony. 2015. “Active Defense: An Overview of the Debate and a Way Forward.”
Google. n.d. “Application Security – Google.” Accessed October 7, 2020. https://www.google.com/about/appsecurity/.
Government. 2020. “Review of the Network and Information Systems Regulations.” https://www.gov.uk/government/publications/review-of-the-network-and-information-systems-regulations.
Government, HM. 2016. “National Cyber Security Strategy 2016-2021.” https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/567242/national_cyber_security_strategy_2016.pdf.
Guinchard, Audrey. 2018. “The Computer Misuse Act 1990 to Support Vulnerability Research? Proposal for a Defence for Hacking as a Strategy in the Fight Against Cybercrime.” Journal of Information Rights, Policy and Practice 2 (2). https://doi.org/10.21039/irpandp.v2i2.36.
Haber, Morey. 2020. “Penetration Testing Remote Workers.” June 28, 2020. https://www.secureworldexpo.com/industry-news/penetration-testing-remote-workers.
ICO. 2020. “Exemptions.” ICO. July 20, 2020. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/exemptions/.
Johns, Emma. 2020. “Cyber Security Breaches Survey 2020.”
Macewan, NF, and others. 2008. “The Computer Misuse Act 1990: Lessons from Its Past and Predictions for Its Future.” Criminal Law Review 12: 955–67.
Mansfield-Devine, Steve. 2013. “Mark Raeburn, Context: The Evolution of Pen-Testing.” Computer Fraud & Security 2013 (11): 16–20.
Marjanovic, Zoran. 2013. “Effectiveness of Security Controls in BYOD Environments.”
Microsoft. n.d.a. “Microsoft Bounty Legal Safe Harbor.” Accessed October 7, 2020. https://www.microsoft.com/en-us/msrc/bounty-safe-harbor.
Microsoft. n.d.b. “Microsoft Bounty Programs | MSRC.” Accessed October 7, 2020. https://www.microsoft.com/en-us/msrc/bounty.
Murray, Andrew. 2016. Information Technology Law: The Law and Society. Third edition. Oxford, United Kingdom ; New York, NY: Oxford University Press.
NCSC. 2017. “Penetration Testing.” August 8, 2017. https://www.ncsc.gov.uk/guidance/penetration-testing.
Nicomette, Vincent, Mohamed Kaâniche, Eric Alata, and Matthieu Herrb. 2011. “Set-up and Deployment of a High-Interaction Honeypot: Experiment and Lessons Learned.” Journal in Computer Virology 7 (2): 143–57.
Patrawala, Fatema. 2018. “5 Penetration Testing Rules of Engagement: What to Consider.” Packt Hub. May 14, 2018. https://hub.packtpub.com/penetration-testing-rules-of-engagement/.
Piltz, Carlo. 2020. “German Data Protection Authority: Penetration Test Requires a Data Processing Agreement. | LinkedIn.” January 13, 2020. https://www.linkedin.com/pulse/german-data-protection-authority-penetration-test-requires-piltz/.
Rasch, Mark. 2013. “Legal Issues in Penetration Testing.” SecurityCurrent. November 26, 2013. https://securitycurrent.com/legal-issues-in-penetration-testing/.
“R V. Mangham.” 2012. EWCA Crim 2012: 973.
Shinberg, David A. 2003. “A Management Guide to Penetration Testing.” SANS Institute. https://pen-testing.sans.org/resources/papers/gcih/management-guide-penetration-testing-103697.
Sokol, Pavol, Jakub Míšek, and Martin Husák. 2017. “Honeypots and Honeynets: Issues of Privacy.” EURASIP Journal on Information Security 2017 (1): 1–9.
Sommer, Peter. n.d. “Two Recent Computer Misuse Cases.” Computers & Law, 4.
Spitzner, Lance. 2003. Honeypots: Tracking Hackers. Vol. 1. Boston, MA: Addison-Wesley Reading.
Stuttard, Dafydd. 2008. “Business as Usual.” PortSwigger Blog. January 4, 2008. https://portswigger.net/blog/business-as-usual.
Vaidya, Rishi. 2019. “Cyber Security Breaches Survey 2019.”
Walden, Ian. 2007. Computer Crimes and Digital Investigations. New York, NY: Oxford University Press, Inc.
Walton, Richard. 2006. “The Computer Misuse Act.” Information Security Technical Report 11 (1): 39–45. https://doi.org/10.1016/j.istr.2005.11.002.
Wilson, Jim. 2020. “Major German Shopping Site Leaks Customer Data.” SafetyDetectives. September 15, 2020. https://www.safetydetectives.com/blog/windeln-leak-report/.