Managing Risk in the Adoption of Generative AI

April 16, 2024

By Marleen Mavrow, CISM, CRISC, PMP, GRC Practice Lead & Privacy Officer, Charter

The Opportunity

With the public release of ChatGPT in November 2022, Generative Artificial Intelligence (Generative AI) launched into the public consciousness and quickly dominated headlines. (1) Within two months, it gathered over 100 million users, making it the fastest-growing consumer application in history. (2) Since then, several tech giants have launched competing Generative AI applications and services, eager to seize market position in this fast-growing area.


Generative AI promises to provide significant benefits to individuals, corporations, and society if it can be harnessed effectively. New insights and trends can be gleaned from dormant data, helping to foster creativity and stimulate product design. Enhanced automation will drive efficiency and cost reduction, as well as improve the detection of threats and vulnerabilities.

With these enticing new opportunities, organizations are seeking ways to leverage the power of Generative AI. Yet there are growing concerns around privacy, accuracy, and ethical use. Even if organizations are not ready for implementation, the technology's prevalence within industry and its public availability mean it is likely already influencing staff, customers, partners, and other stakeholders. Executive leaders are seeking governance strategies that allow them to harness this technology while ensuring safe, secure, and predictable business outcomes.

This paper will explore approaches for the safe adoption of Generative AI technology that can be leveraged across all industries and by organizations of any size.

In full disclosure, this paper was produced entirely with human intelligence, with no source data from any Generative AI tools. 

The Challenge Today

Perhaps surprisingly, Artificial Intelligence has been in use for a long time. Alan Turing saw its potential as early as 1950 in his paper “Computing Machinery and Intelligence”, but it took the computational power and storage capabilities of modern computers to enable large-scale progress. (3) By 1997, IBM’s famous ‘Deep Blue’ computer beat world chess champion Garry Kasparov, (4) and a year later Cynthia Breazeal introduced Kismet, an emotionally intelligent robot. (5) In 2011, Apple integrated Siri into the iPhone 4S, (6) and Amazon introduced Alexa in 2014. (7)

Starting around 2010, Artificial Intelligence went mainstream, allowing organizations to create business value, particularly in the banking, marketing, and entertainment sectors. (8) Popular uses included data processing, predictive modelling (for sales, market trends, or financial performance), and data validation and protection, sniffing out anomalies in vast amounts of information. While these tools remained important for corporations, they were corporate-owned, robotic, and static in nature.

While related to those prior works, Generative AI is a quantum leap forward: it can easily synthesize and “generate” net-new data, at scale, almost instantaneously - hence its name. Its power to quickly process, learn, create, and provide content in multiple formats is transformative across every industry sector. Disruption is already occurring in traditional business fields. In a recent Dell survey, 76% of IT decision makers estimated that Generative AI will have a considerable impact on their organization. (9)

Significantly, this power can be harnessed by individuals, eliminating the barrier that separates domain experts from users. Its positive impact is evidenced in Generative AI-created music, art, education, and film. 

However, cybercriminals also have the same ability to harness Generative AI, creating a notable cyber risk to individuals, business, and national security. Due to the rate at which a malicious Generative AI system could operate, many cybersecurity experts see Generative AI as the only practical response to those same threats. (10)


Additionally, Generative AI guardrails have vulnerabilities and can be bypassed or exploited through techniques such as manipulative prompting. If realized, this can lead to unauthorized data access or technology manipulation, such as embedding harmful instructions or generating inappropriate content. The toy example below illustrates why simplistic input filters are so easily evaded.
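
To make the weakness concrete, here is a minimal, hypothetical sketch of a naive keyword-based guardrail; the blocked terms and prompts are illustrative assumptions, and production guardrails layer trained classifiers, output filtering, and human review on top of anything this simple.

```python
# Illustrative sketch of a naive keyword-based guardrail.
# Terms and prompts are hypothetical; real guardrails are far more layered.

BLOCKED_TERMS = {"password", "exfiltrate", "bypass"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt looks safe under a simple keyword check."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A direct request is caught...
assert naive_guardrail("Please exfiltrate the customer database") is False

# ...but a trivial rephrasing slips straight through, which is why
# manipulative prompting can defeat simplistic input filtering.
assert naive_guardrail("Please ex-filtrate the customer database") is True
```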

While this technology presents tremendous opportunity, it raises many questions, including:

  • Are the outputs consistently accurate?
  • Are the outcomes fair?
  • Can the results be explained?
  • Is there a hidden or overt bias?
  • Are there suitable safeguards?

Canada’s federal financial institutions regulator, the Office of the Superintendent of Financial Institutions (OSFI), has described the ethics of Generative AI as a complex endeavor, where “One person’s view of fairness may differ from another’s.” (11)

As organizations rush to embrace this innovation accelerator, some pitfalls have already occurred. Deepfakes of individuals have led to inappropriate payments (12) and chatbots have provided false recommendations. (13) If Generative AI creates a new song, who ultimately owns the song? And if it infringes upon another artist’s copyright, who is at fault? 

Generative AI is lurking behind the scenes, shifting the way businesses operate and changing the mechanics of day-to-day work. Traditional understandings, processes, and technologies are being upended, and this presents new layers of risk and uncertainty.

The Path Forward

Many organizations have a strong desire to leverage Generative AI, and experimentation with Generative AI tools is relatively common at all seniority levels, both within and outside of work. (14) Given this widespread use, the question is no longer “if” but “how.” It is, therefore, necessary for organizations to build guidance on the appropriate use of Generative AI, followed by a business case for the adoption of Generative AI technology.

Gain Consistency


As a first step, organizations should put in place an Acceptable Use Policy (AUP) for Generative AI tools. Such guidance provides clear expectations around the use of these tools, particularly noting which activities are acceptable or prohibited. Without a clear policy in place, organizations risk the loss or exposure of data, as well as potential security compromises. A growing number of template policies are available that organizations can easily adopt and customize to match their desired culture and sensitivities.

Make a Case

As organizations look to deploy Generative AI, a business case should be established outlining the following important factors:

  • Fit for Use - Organizations should clearly define the intended use and desired outcomes of Generative AI and its suitability for the organization. Before building solutions, it would be valuable to compare those desired outcomes against industry trends, as well as findings from research and consulting firms, to confirm it is fit for use.
  • Value - The expected benefits of utilizing Generative AI should be clearly quantified, including new or improved products and services, efficiencies through automation, financial gains, personalization, and improved business insights via data analysis. The value should be weighed against the risks outlined below.
  • Areas of Concern - Any Generative AI project should be reviewed to identify the areas of greatest risk to success. These areas (in particular bias, consistency, and explainability) require careful understanding, management, and oversight to ensure outcomes are fair, accurate, compliant, and trustworthy.
  • Management of Risk - A detailed assessment of the risks Generative AI poses to the organization, resources, systems, and data should be made. Where possible, risk mitigation strategies should be built to manage those risk areas. 
  • Implementation - Expectations for implementation should be defined, including security and privacy by design, leveraging industry frameworks for robust control selection, and clearly defined milestones. Because Generative AI is groundbreaking, it is generally recommended that implementation be conducted in iterative stages (such as Proof of Concept, Operationalization, and Enablement) with rigorous testing to ensure the system does not inadvertently produce false, misleading, or unethical outputs and that value goals are being met.
  • Compliance - Throughout the project, it is essential the organization verifies that outcomes of the technology will adhere to laws and regulations, particularly those involving data privacy.

Why Ethical and Legal Considerations are a Big Deal

Ethics Matter

As a new, complex technology, Generative AI can have far-reaching ethical and legal repercussions that can quickly, and significantly, harm businesses if not appropriately addressed up front. To reduce risk, the following areas should be addressed in a business case for Generative AI adoption.

  • Bias - Bias can be found in several areas, including test data that lacks appropriate representation, the algorithms themselves, or the data used by the algorithm. While some bias could be intentional, such as deliberately conservative forecast predictions, organizations need to ensure there isn’t unintended bias that creates incorrect, distorted, or unintended outputs that could damage public trust. For example, Amazon stopped using a hiring algorithm after finding bias against women. (15)
  • Accuracy - Accuracy is based on several factors, including the accuracy of the Large Language Models (LLMs) used by Generative AI tools in combination with test data. Accuracy should be validated against changing conditions and varied inputs to ensure outcomes are real and correct. As Generative AI presents outcomes in a confident, verbose manner, users may not recognize a fictitious output (often referred to as a “hallucination”). For example, in December 2023, a Vancouver lawyer provided precedent cases found using ChatGPT for a divorce proceeding, only to discover that the cases were in fact hallucinations. The lawyer was reprimanded by the courts and required to pay financial compensation. (16)
  • Consistency & Reliability - Conditions and inputs to Generative AI tools are constantly changing, and because outputs are sampled rather than deterministic, two queries with identical prompts can return significantly different results, potentially impacting the consistency and reliability of outputs (a minimal repeatability check is sketched after this list). Validating the quality of the data sets utilized by the LLMs establishes a stronger foundation for evaluating outputs, ensuring they are real and correct, and promotes transparency. This was a poignant lesson for Uber when one of its self-driving vehicles killed a pedestrian after the AI-powered advanced driver assistance system failed to recognize a jaywalker on the highway. (17)
  • Explainability - As outcomes are generated, organizations and individuals need to have a full understanding of the AI decision-making process, including monitoring and accountability. Outcome validation must occur at a granular level, allowing for auditability and traceability of results; this also mitigates the compliance, legal, security, and reputational risks of Generative AI. Lack of explainability has real-world implications ranging from loan approvals and travel advice to healthcare diagnoses, and organizations that cannot explain their decisions may run afoul of customers and the courts.
  • Data & Privacy - One of the key uses of Generative AI is efficient data synthesis that leverages its powerful LLMs, particularly to handle the growing volume of data and improve fraud detection. However, its heightened intelligence means Generative AI can deduce personal information through data aggregation. Organizations are encouraged to incorporate safeguards, such as avoiding inputs of sensitive or personally identifiable information (a simple input-redaction sketch also follows this list), to ensure outputs are not biased, that personal information can be removed upon request, and that privacy laws are complied with. This also has the benefit of increasing digital trust: a McKinsey report found that 40% of respondents would take their business elsewhere if they learned that an organization was not protective of its customers’ data. (18)
  • Intellectual Property (IP) - A number of uncertainties arise with Generative AI and IP rights. Careful consideration is required to ensure that training data, as well as content generated by Generative AI tools, is original and does not pose a risk of infringement, misappropriated likeness, or plagiarism. There are also opportunities for the abuse of an organization’s own IP through the sharing of private data with public systems; many companies, such as Amazon and Samsung, prevent the upload of internal data to public Generative AI services for this reason. There has also been little consideration yet of the ownership of inferred data: if Generative AI infers a high-value opportunity from a user’s data, does the opportunity belong to the data owner or the tool owner? (19)
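
As a minimal sketch of the repeatability check mentioned under Consistency & Reliability, the following assumes the OpenAI Python SDK and a placeholder model name and prompt; any hosted LLM that exposes a temperature parameter could be substituted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, temperature: float) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

prompt = "Summarize our refund policy in one sentence."

# With default sampling, two identical queries can return noticeably different text.
print("Match at T=1.0:", ask(prompt, 1.0) == ask(prompt, 1.0))

# Lowering the temperature narrows the variation, though it still does not
# guarantee byte-for-byte reproducibility across model or platform updates.
print("Match at T=0.0:", ask(prompt, 0.0) == ask(prompt, 0.0))
```

Running such checks routinely, and logging prompts alongside outputs, gives auditors concrete evidence of how stable the system actually is.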

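Similarly, the input-redaction safeguard noted under Data & Privacy can start as simply as masking obvious identifiers before a prompt leaves the organization. The patterns below are illustrative assumptions only and nowhere near exhaustive; commercial data loss prevention tools go much further.

```python
import re

# Hypothetical pre-processing step: mask obvious PII before a prompt is sent
# to a public Generative AI service. Patterns are illustrative, not exhaustive.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{3}\b"),  # Canadian SIN layout
}

def redact(text: str) -> str:
    """Replace each recognized PII pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 604-555-0199."))
# -> Contact Jane at [EMAIL] or [PHONE].
```
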
Organizations are Responsible for Outcomes

Regardless of people, process, or technology, organizations will always be legally responsible for ensuring they comply with current laws and regulations - particularly provincial and federal privacy laws and legislation. Using a new technology is not a valid excuse to regulators or the courts for bypassing laws. For example, Air Canada recently argued before a B.C. tribunal that it wasn’t responsible for outputs from its Generative AI-powered chatbot – and was unsuccessful. (13) Given this, organizations need to consider the ethical influence and ramifications of the Generative AI outcomes their organizations produce.

Where are Industry Standards, Regulations, and Legislation?

Legislation is Playing Catch-Up

Current laws and legislation are largely silent on Generative AI, as most were adopted prior to the Generative AI revolution. Still, Canada’s privacy legislation, PIPEDA, is consent-based: organizations must obtain consumer consent before using their information, regardless of the tools they utilize. (20)


In response to fraud and the rapid rise of ransomware, Canada is moving forward to modernize its legislation:

  • Bill C-26 - Proposes new cybersecurity requirements to protect vital Canadian systems and services against digital threats and hostile actors. This bill will apply to Financial Institutions and Telecommunications Services, as well as Power and Nuclear Systems. (21)
  • Bill C-27 - Proposes amendments to PIPEDA to regulate the collection, use, and disclosure of personal information in Canada, applying both federally and to private sector organizations. Additionally, the new law will introduce new requirements for the use of Generative AI. (22)

Regulators can Offer Sage Advice

In addition, federal and provincial regulators are regularly issuing guidance on Generative AI, including provincial health authorities, as well as OSFI. OSFI, together with the Global Risk Institute, formed the Financial Industry Forum on Artificial Intelligence, which has put forward recommended safeguards and risk management strategies grouped into four principles (the EDGE principles): Explainability, Data, Governance, and Ethics. (23) Its recommendations are applicable to any industry and should be leveraged in Generative AI business cases and designs.

Why Wait?

It is not in organizations’ best interest to wait for regulators to issue guidance or for laws to come into effect – they should take steps to get ahead of the curve. Guidance is already available from respected standards and frameworks, including:

  • NIST - AI Risk Management Framework 1.0 (24)
  • ISO/IEC 23894 - AI Guidance on Risk Management (25)
  • ISO/IEC 42001 - AI Management System (26)
  • ISO/IEC 23053 - Framework for AI Systems using Machine Learning (27)

Moving Ahead with Confidence


Generative AI will likely be the most transformative technology of our time and represents exciting opportunities for efficiency, growth, and protection. As this new technology unfolds, it must be guided as a tool that enhances, rather than undermines, potential and creativity. Organizations can move forward with confidence and manage risk by having departmental and planning leaders take the following steps.

  • Acceptable Use - Set expectations of how Generative AI is utilized at your organization through an acceptable use policy.
  • The Case for AI - Establish a vision and roadmap for Generative AI within your organization. Develop a business case that clearly states its uses and intended outcomes. This will also ensure ethics, explainability, accuracy, bias, and privacy can be suitably addressed at the start and throughout implementation.
  • Leverage Best Practices - Become familiar with industry frameworks and standards to ensure their recommended best practices are incorporated, and familiarize your organization with upcoming laws and regulations.
  • Multidisciplinary Views - Obtain a broad perspective from multiple departments, including Information Technology, Privacy, Human Resources, Legal, Operations, and Marketing during all stages of Generative AI adoption.
  • Validation - It is essential that organizations validate and test their use of Public, as well as Private, AI environments, pressure-testing them to ensure the data produced reflects their culture and expectations for accuracy (a minimal test harness is sketched after this list).
  • Transparency - As organizations move forward, they should communicate their intent internally through strong policies, as well as externally to consumers, fostering a spirit of digital trust and transparency.
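
To make the validation step concrete, here is a minimal, hypothetical regression harness; `generate`, the test prompts, and the pass criteria are all illustrative assumptions to be replaced with an organization’s own model endpoint and expectations.

```python
from typing import Callable

def generate(prompt: str) -> str:
    """Stand-in for whatever public or private model endpoint is in use."""
    raise NotImplementedError("Wire this to your model endpoint.")

# Each case pairs a prompt with a predicate its output must satisfy.
TEST_CASES = [
    ("What is our refund window?", lambda out: "30 days" in out),
    ("Summarize the AUP for new hires.", lambda out: "prohibited" in out.lower()),
]

def run_validation(model: Callable[[str], str]) -> bool:
    """Run every case, report failures, and return True only if all pass."""
    failures = []
    for prompt, check in TEST_CASES:
        output = model(prompt)
        if not check(output):
            failures.append((prompt, output))
            print(f"FAILED: {prompt!r} -> {output!r}")
    print(f"{len(TEST_CASES) - len(failures)}/{len(TEST_CASES)} checks passed.")
    return not failures
```

Re-running the same harness whenever the model, its prompts, or its underlying data change helps catch drift before customers ever see it.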

While Generative AI does not necessarily change the entire business model of an organization, it can be a catalyst for growth. Leaders that take the above steps will enhance organizational resilience, retain talent, accelerate digital transformation, and position their company well to compete in their marketplaces.

About the Author

Marleen Mavrow, CISM, CRISC, PMP, GRC Practice Lead & Privacy Officer, Charter

Marleen Mavrow is Charter’s GRC Practice Lead and Privacy Officer and brings 20+ years of experience in strategic planning and operational success. Focused on success through teamwork, collaboration, and stakeholder management, Marleen has led many business transformation engagements with small, medium, and large-scale clients across various Canadian sectors, including financial, transportation, technology, and the public sector, leading GRC and Digital Transformation efforts. A proven leader with strong analytical and communications abilities, Marleen aims to align IT practices to business goals.

Marleen is a member of ISACA, DAMA and PMI and is an active member of the ISACA Vancouver Chapter Board, championing the advancement of digital trust by empowering IT professionals through growth in knowledge and skills.

About Charter

Charter is an award-winning IT solution and managed services provider established in 1997 in Victoria, BC, Canada. Our mission is to align people, process, and technologies to help build better organizations, enhance communication, boost operational performance, and modernize businesses.

Our team of experts leverages a business architecture methodology and a human-centered design approach to drive successful digital transformation for our clients, unlocking new opportunities, generating new value, and driving growth. Charter offers a comprehensive range of services, including advisory and consulting, project services, and managed services, allowing the company to provide end-to-end solutions, from planning and design to implementation and ongoing support. We offer knowledge and support that extends beyond our clients’ businesses, empowering them to focus on their core operations and go to market faster and more effectively. Let Charter drive business outcomes Forward, Together.

For more information on Charter, please contact:

Dawn van Galen

Marketing Manager

250-412-2517

DVANGALEN@CHARTER.CA

WWW.CHARTER.CA

Sources

(1) OpenAI. (2022, November 30). Introducing ChatGPT. https://openai.com/blog/chatgpt 

(2) Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ 

(3) Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/lix.236.433 

(4) IBM. (n.d.). Deep Blue. https://www.ibm.com/history/deep-blue

(5) Graham-Rowe, D. (n.d.). Meet Kismet ... New Scientist. https://www.newscientist.com/article/mg15921480-800-meet-kismet/ 

(6) Apple Launches iPhone 4S, iOS 5 & iCloud. (n.d.). Apple Newsroom. https://www.apple.com/newsroom/2011/10/04Apple-Launches-iPhone-4S-iOS-5-iCloud/ 

(7) Stone, B. (2021, May 11). The Secret Origins of Amazon’s Alexa. Wired. https://www.wired.com/story/how-amazon-made-alexa-smarter/ 

(8) Before generative AI there was... just AI. (n.d.). CIO. Retrieved April 24, 2024, from https://www.cio.com/article/656697/before-generative-ai-there-was-just-ai.html 

(9) Generative AI Pulse Survey. (n.d.). Retrieved April 24, 2024, from https://www.delltechnologies.com/asset/en-us/solutions/infrastructure-solutions/templates-forms/dell-technologies-genai-pulse-survey.pdf 

(10) What Generative AI Means for Cybersecurity in 2024. (2024, February 8). Trend Micro. https://www.trendmicro.com/en_ca/research/24/b/generative-ai-cybersecurity-2024.html 

(11) Office of the Superintendent of Financial Institutions. (2023, April 30). How ethical subjectivity complicates AI regulation. https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/financial-industry-forum-artificial-intelligence-canadian-perspective-responsible-ai/ethical-subjectivity-complicates-ai-regulation

(12) Kohli, A. (2023, April 29). AI Voice Cloning Is on the Rise. Here’s What to Know. Time. https://time.com/6275794/ai-voice-cloning-scams-music/ 

(13) Proctor, J. (2024, February 16). Air Canada found liable for chatbot’s bad advice on plane tickets. CBC. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416 

(14) McKinsey & Company. (2023, August 1). The state of AI in 2023: Generative AI’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year

(15) Dastin, J. (2018, October 11). Insight - Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G/ 

(16) Proctor, J. (2024, February 27). B.C. lawyer reprimanded for citing fake cases invented by ChatGPT. CBC. https://www.cbc.ca/news/canada/british-columbia/lawyer-chatgpt-fake-precedent-1.7126393 

(17) McKendrick, J., & Thurai, A. (2022, September 15). AI Isn’t Ready to Make Unsupervised Decisions. Harvard Business Review. https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions 

(18) Boehm, J., Grennan, L., Singla, A., & Smaje, K. (2022, September 12). Digital trust: Why it matters for businesses. McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/why-digital-trust-truly-matters

(19) Ray, S. (n.d.). Samsung Bans ChatGPT Among Employees After Sensitive Code Leak. Forbes. Retrieved April 24, 2024, from https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak 

(20) Office of the Privacy Commissioner of Canada. (2015, June 23). PIPEDA legislation and related regulations. https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-and-electronic-documents-act-pipeda/r_o_p/

(21) Parliament of Canada. (n.d.). Bill C-26 (44-1) - LEGISinfo. https://www.parl.ca/legisinfo/en/bill/44-1/c-26

(22) Parliament of Canada. (n.d.). Bill C-27 (44-1) - LEGISinfo. https://www.parl.ca/LegisInfo/en/bill/44-1/C-27

(23) Office of the Superintendent of Financial Institutions. (2023, July 17). Financial Industry Forum on Artificial Intelligence: A Canadian Perspective on Responsible AI. https://www.osfi-bsif.gc.ca/en/about-osfi/reports-publications/financial-industry-forum-artificial-intelligence-canadian-perspective-responsible-ai

(24) Tabassi, E. (2023). AI Risk Management Framework. Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://doi.org/10.6028/nist.ai.100-1 

(25) ISO. (n.d.). ISO/IEC 23894:2023. https://www.iso.org/standard/77304.html

(26) ISO. (n.d.). ISO/IEC DIS 42001. https://www.iso.org/standard/81230.html

(27) ISO. (n.d.). ISO/IEC 23053:2022. https://www.iso.org/standard/74438.html