Artificial Intelligence, Trust and the Cybersecurity Industry: What’s Cooking?

Image by Franck V on Unsplash

22-Jul | Written by Vanessa Henri & Linda Agaby

It is hard to deny the overwhelming strategic and economic importance of artificial intelligence (“AI”) to Canada moving forward. In 2017, the Government of Canada had already appointed CIFAR, which works in close collaboration with the Amii, Mila and Vector institutes, to develop and lead the $125 million Pan-Canadian Artificial Intelligence Strategy, the world’s first national AI strategy.

On June 15th, 2020, the Government of Canada, along with representatives of the 14 other founding members of the Global Partnership on Artificial Intelligence, announced a new initiative to advance the responsible development of AI. As part of this, it also committed to joining the Government of Québec’s efforts through the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (“ICEMAI”).

On March 2nd, 2020, ZDNet published, as part of a special report titled Cybersecurity: Let’s Get Tactical, an article which began with: “AI is changing everything about cybersecurity, for better and for worse”. On June 12th, 2020, Meticulous Market Research Pvt. Ltd. published a market research report titled “Artificial Intelligence (AI) in Cybersecurity Market”, noting that AI in the cybersecurity market is expected to grow at a CAGR of 23.6% from 2020 to 2027, reaching $46.3 billion by 2027. Clearly, AI will be tactical in the future of cybersecurity, so let’s take a deep dive into some key questions:

  • How did AI change the cybersecurity industry?

  • What should cybersecurity firms seeking to integrate AI in their tools and practices take into consideration?

  • Who’s liable for these algorithms, and how?

How did AI change the cybersecurity industry?

There are the good, the bad… and the ugly.

Let’s start by acknowledging that AI can be of tremendous help to malicious actors. To begin with, AI can improve the efficiency of already known threats. Take phishing attacks as an example. Malicious actors can use AI to sharpen spear-phishing attacks by improving their capacity to mimic writing styles (1). For example, the Emotet Trojan can generate massive amounts of contextualized phishing emails that insert themselves into existing email chains, and AI that masters natural language could make these attacks even more believable (2). AI could also be used to conduct much of the reconnaissance work a skilled hacker might perform in preparation for an attack, identifying the weakest targets within an organization and scanning data sets for valuable information (3). Furthermore, AI can be used to create new technical threats, such as malicious code that self-adjusts to its environment, for example by learning what types of code cybersecurity systems are looking for in order to mutate and bypass them (4).

That being said, in the face of heightened cybersecurity threats, defensive AI can also be deployed to support IT teams by identifying threats and reducing reaction time. For example, while machine learning allows malicious actors to develop code that evades detection, machine learning can also be used as a defensive strategy to discover malicious code that is bundled with other code (5). AI can be used to detect, predict and respond to a greater volume and complexity of cyber attacks (6). Common defensive applications include:

  • Scanning systems for unauthorized devices: AI can be used to detect unauthorized and potentially threatening devices in the network.

  • Automated malware defense: The increasing volume of malware makes it necessary to use automated methods to sift through and identify malware attacks before they can affect a system (7). AI can be programmed to flag suspicious code and potentially malicious behaviour using heuristic algorithms trained on previous examples of malicious software (see the sketch after this list).

  • Cybersecurity planning: AI systems can be used to develop cybersecurity plans, such as attack trees, which allow you to build and constantly update stronger cybersecurity protections (8).

  • Automated phishing detection: Phishing attacks are one of the greatest vulnerabilities of many companies, and AI can be trained to flag suspicious messages and links before employees act on them.

  • Bot defender strategies: Machine learning techniques can also be used to identify and destroy bots.
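
To make the automated malware defense bullet concrete, here is a minimal sketch of a heuristic classifier trained on previous examples of malicious and benign software. The feature set (file size, byte entropy, counts of suspicious API strings), the labelled samples and the choice of a scikit-learn random forest are illustrative assumptions, not a description of any particular vendor’s detection pipeline.

```python
# Minimal sketch: heuristic malware classification trained on prior examples.
# Features and labelled data below are assumptions for illustration only.
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string; packed or encrypted payloads score high."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Strings often associated with code injection (illustrative, not exhaustive).
SUSPICIOUS_STRINGS = [b"VirtualAlloc", b"CreateRemoteThread", b"WriteProcessMemory"]

def extract_features(sample: bytes) -> list:
    """Turn a raw binary into a small numeric feature vector."""
    return [
        float(len(sample)),
        byte_entropy(sample),
        float(sum(sample.count(s) for s in SUSPICIOUS_STRINGS)),
    ]

# Hypothetical labelled corpus: 1 = known malicious, 0 = known benign.
training_samples = [b"MZ...benign bytes...", b"MZ...VirtualAlloc...malicious bytes..."]
training_labels = [0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([extract_features(s) for s in training_samples], training_labels)

# Score a new, unseen file before it is allowed to run.
new_file = b"MZ...unknown bytes..."
verdict = model.predict([extract_features(new_file)])[0]
print("flag for review" if verdict == 1 else "looks benign")
```

In practice, a detector like this would be trained on large labelled corpora and combined with signature-based and behavioural analysis; the point here is simply how “previous examples of malicious software” become a model that scores new code.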

So, there are the ugly and the good… what’s the bad? Well, the proliferation of AI in customers’ daily lives creates issues as well, especially as AI technology finds applications in data-sensitive fields such as healthcare. These new applications introduce novel cybersecurity risks (9). For example, employees working from home could have their virtual assistants hacked to pick up private information. Moreover, the use of intelligent technologies in healthcare or in public utilities could increase vulnerability to ransomware attacks (10).

Nonetheless, the opportunities are significant. As a result of COVID-19 and related lockdowns, organizations are operating under “work from home” policies that increase reliance on remote access systems, making them more vulnerable to DDoS and phishing attacks. Not to mention that heavier dependence on the network increases the risks of ransomware attacks, data theft and data breaches, while straining detection and alerting capabilities.

The use of AI can go a long way in making this new work arrangement more secure. According to the research published by Meticulous Research on AI in cybersecurity, by security type, the network security segment is “estimated to account for the largest share of the overall artificial intelligence in cybersecurity market in 2020, owing to the increasing number of APTs, malware and phishing attacks, and the BYOD policies”. Looking at applications, identity and access management accounts for the largest share of the overall artificial intelligence in cybersecurity market in 2020. This brings us to the next point.

What should cybersecurity firms seeking to integrate AI in their tools and practices take into consideration?

Firstly, you should know that AI development and integration comes with its own challenges, starting with the fact that it requires a lot of data. AI also faces data-specific threats and must itself be secured. For instance, a recent report from the Berryville Institute of Machine Learning, titled ‘An Architectural Risk Analysis of Machine Learning Systems’, demonstrates that there are significant risks to consider, such as data poisoning. Care should therefore be taken to ensure the proper development of AI and to reduce the cybersecurity threats that could affect the algorithms.
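
As an illustration of the data poisoning risk mentioned above, here is a small sketch showing how an attacker who can tamper with a fraction of the training data degrades the resulting model. The synthetic dataset, the logistic regression model and the 30% label-flipping rate are assumptions chosen for demonstration, not figures taken from the Berryville Institute report.

```python
# Illustrative sketch of data poisoning via label flipping on a training set.
# Dataset, model and poisoning rate are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))

# Poisoned training set: flip the labels of 30% of the training examples.
poisoned_y = y_train.copy()
flipped = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flipped] = 1 - poisoned_y[flipped]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))
```

Securing the data pipeline, controlling who can write to training sets, and monitoring model performance over time are therefore part of securing the AI itself.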

It also has privacy-related concerns to tackle. Data protection authorities worldwide are closely scrutinizing the AI industry, as demonstrated by the recent consultation by Canada’s Office of the Privacy Commissioner. They generally share concerns around the following issues, which tend to be more specific to machine learning and deep learning:

  • Transparency: It is difficult to explain the processing of personal data if it is not known which outcomes will result from the algorithms.

  • Consent: It is difficult to obtain informed consent if the purposes are not known in advance and explained through transparency.

  • Individuals’ Rights: It may not be possible for individuals to withdraw consent, request that their data be de-indexed, or restrict the processing of personal data once it has been inputted into an AI system.

The truth is that each system is unique, but to avoid compliance challenges, organizations should ensure that privacy-by-design is implemented at the conception phase through a privacy impact assessment. As with blockchains, it is often necessary to develop in a certain manner to meet the requirements of the target market. If you fail to do so early, you may not be able to enter that market at all, as your system will be non-compliant by default. These mistakes can be costly. The best approach is to have an independent auditor perform this assessment for you: someone who is not attached to the project and understands the data protection requirements will be best qualified to guide you.

While critics have argued that governments are prioritizing the economic benefits of AI at the expense of privacy, responsible AI has also developed as an improvement to the overall strategy. The idea that AI should be ethical is interesting, as it pushes the concerns beyond privacy to include diversity, risk management, environmental concerns and information security, through an approach called trust-by-design. Indeed, trust has become an important word to describe how technologies should be developed, as demonstrated by the European Commission’s white paper “On Artificial Intelligence – A European approach to excellence and trust”, published on February 19th, 2020, and the Draft Guidelines on Building Trust in Human-Centric Artificial Intelligence published under the European AI strategy, for which consultation concluded on June 14th, 2020. The key values identified were as follows:

  • Human agency and oversight;

  • Technical robustness and safety;

  • Privacy and data governance;

  • Transparency;

  • Diversity, non-discrimination and fairness;

  • Societal and environmental well-being; and

  • Accountability.

To enforce these values, organizations such as AI Global are working on practical tools, such as the Responsible AI Check, which aims to create a certification mark based on the AI Trust Index once it is deployed. Interestingly, AI Global also has a map titled “Where AI has gone wrong”, which “represents historical instances of where AI has adversely impacted society in a specific domain”. Prior to integrating AI into your products, you would be well advised to complete an algorithmic impact assessment to ensure that your AI does not discriminate and is built in a manner that is as ethical as possible. While this exercise is not always mandatory, the Directive on Automated Decision-Making makes it mandatory in some cases at the federal level. If you’re targeting this industry, make sure that you anticipate due diligence and audit requirements; otherwise, you may waste time and energy on procurement requirements such as filling out RFPs, only to find out that you ultimately won’t qualify.

These points emphasize how AI can also be a liability and may lead to reputational damage if not well built, which brings us to our last question.

Who’s liable for these algorithms, and how?

Algorithms do not have legal personality of their own. In fact, they cannot even meet the definition of an inventor for patent registration, as recently pointed out by the United States Patent and Trademark Office in response to Application No. 16/524,350, “Devices and Methods for Attracting Enhanced Attention”. Yet the question is not as easy as it looks. If an AI system fails, who is responsible? Is there a distinction between the seller and the manufacturer of the AI? Are AI systems considered products or services? While the courts and legislators must still develop this framework in Canada, we offer our interpretation of the current situation, in the hope that it will entice organizations to take the right precautions.

There are many potential sources of liability. We believe that defective product laws are very likely to apply to algorithms. Let’s take Québec as an example, since it has the strongest consumer protection laws in Canada. Consumers cannot waive the courts’ jurisdiction, which means they can always have their case heard in Québec notwithstanding anything to the contrary in the contract, and organizations and their representatives cannot limit their liability for their own acts or for bodily or moral damages. Mandatory arbitration is also prohibited. The law includes an implicit guarantee that a product must be able to serve the use for which it is intended and must remain usable for a reasonable amount of time. A professional seller is legally presumed to have knowledge of any defect affecting its products, and rebutting that presumption is subject to a stringent test; manufacturers are subject to the strongest presumption of knowledge. The defect is also presumed to exist at the moment of the sale, and a seller cannot limit its liability if it is presumed to know of a product defect. All in all, this means that if you intend to sell products to consumers which include AI, you are navigating high-liability ground and must take all necessary precautions. Even if you limit your liability in a contract with a commercial entity that resells your product, as a manufacturer you will still be held liable towards third parties.

Another area where liability cannot be managed through contractual agreements is statutory liability. For instance, a breach of privacy law exposes organizations to legal action by industry watchdogs. While the consequences have traditionally been limited in Canada, the recent Bill 64 proposes to extend fines to GDPR levels in Québec (up to $25M or 4% of global revenues) and introduces a private right of action based on the legislation. We expect something similar at the federal level, as the Office of the Privacy Commissioner announced a reform of the Personal Information Protection and Electronic Documents Act around the same time it announced the Consultation on AI. Organizations in Québec could also be held liable under the Charter of Human Rights and Freedoms, which provides for punitive damages and explicitly includes the right to be protected from discrimination and the right to privacy.

While Canada has yet to develop a strong position on this, the European Commission’s Expert Group on Liability and New Technologies has already released a report on Liability for Artificial Intelligence, which offers high-level recommendations on how liability regimes can be adapted to meet the challenges posed by AI. The report concludes that certain characteristics of AI, such as limited predictability, opacity, complexity and autonomy, may challenge traditional legal systems, and suggests measures such as strict liability for operators and producers, similar to what exists in Québec. Based on the foregoing, organizations seeking to develop AI will have to navigate a constantly moving legal framework and should be ready to participate actively in the policy debate, to adapt, and to respond to liabilities with adequate measures at both the preventive and reactive levels.

Overall, AI in cybersecurity cannot be avoided. The questions are when, and how? Our advice: Manage your risks. Do it well. Innovate.

References

  1. https://www.capgemini.com/wp-content/uploads/2019/07/AI-in-Cybersecurity_Report_20190711_V06.pdf

  2. https://www.weforum.org/agenda/2019/06/ai-is-powering-a-new-generation-of-cyberattack-its-also-our-best-defence/

  3. https://www.weforum.org/agenda/2019/06/ai-is-powering-a-new-generation-of-cyberattack-its-also-our-best-defence/

  4. https://www.weforum.org/agenda/2019/06/ai-is-powering-a-new-generation-of-cyberattack-its-also-our-best-defence/

  5. https://www.zdnet.com/article/ai-is-changing-everything-about-cybersecurity-for-better-and-for-worse-heres-what-you-need-to-know/

  6. https://www.capgemini.com/wp-content/uploads/2019/07/AI-in-Cybersecurity_Report_20190711_V06.pdf

  7. https://towardsdatascience.com/cyber-security-ai-defined-explained-and-explored-79fd25c10bfa

  8. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8449268

  9. https://www.techrepublic.com/article/how-ai-ml-and-automation-can-improve-cybersecurity-protectionread-insights-from-industry-experts-on-how-artificial/

  10. https://www.technologyreview.com/2018/08/11/141087/ai-for-cybersecurity-is-a-hot-new-thing-and-a-dangerous-gamble/

Vanessa Henri

LAWYER, EMERGING TECH • FASKEN • MONTREAL, PQ

https://www.linkedin.com/in/vanessahenri/