Alison El-Araby and Janice Kim: Ethical dilemmas of artificial intelligence

04 Jul 2023 Expert insight

Newton’s Alison El-Araby, portfolio manager, and Janice Kim, associate portfolio manager, examine the key areas of risk related to AI and what these mean for charity investors. 


This content has been supplied by a commercial partner.


Within two months of its release, OpenAI’s ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history.

However, some countries have since banned its use and others are in the process of restricting it. Artificial intelligence (AI) has become much more prominent in technology solutions but, as with the industrial revolution, the AI revolution has both positive and adverse societal implications, and charities may be concerned about what this means for their investments.

Recently, big tech companies have been cutting personnel from teams dedicated to evaluating the ethical issues around deploying AI, which may raise concerns about the safety of the technology as it becomes widely adopted across consumer products. Below, we outline some of the key areas of risk that our responsible investment team believes are important to consider when evaluating AI adoption, and that should be addressed by companies advancing AI.

Unemployment 

Like many revolutionary technologies before it, AI is likely to eliminate some job categories. Many occupations face a significant risk of displacement, with telemarketers and a variety of post-secondary teachers among the most exposed. McKinsey estimates that AI automation could displace between 400 and 800 million workers, requiring as many as 375 million people to switch occupational categories.

The good news is that worker displacement from automation has historically been offset by the creation of new jobs: the emergence of new occupations following technological innovation accounts for most long-run employment growth. The combination of significant labour cost savings, new job creation and higher productivity for non-displaced workers raises the possibility of a productivity boom that substantially increases economic growth, although the timing of such a boom is hard to predict.

Bias

Generative AI models can perpetuate and even amplify existing biases in the data used to train them. For example, a model trained on a biased dataset of news articles might generate text that reflects those biases. In addition, if the people training an algorithm do not come from a range of diverse backgrounds, they may not be able to account for certain biases or experiences that are relevant to environmental, social and governance (ESG) issues. This could perpetuate harmful stereotypes and discrimination, as the simple sketch below illustrates.
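To make the mechanism concrete, here is a minimal, purely illustrative Python sketch (a toy example of our own, not taken from any real system; the professions, pronouns and 90/10 split are invented). A naive “model” that simply learns the most frequent pattern in skewed training data reproduces that skew in everything it predicts:

    from collections import Counter

    # Hypothetical training set: 90% of examples pair "nurse" with "she"
    # and "engineer" with "he" -- an imbalance inherited from the source data.
    training_data = (
        [("nurse", "she")] * 90 + [("nurse", "he")] * 10 +
        [("engineer", "he")] * 90 + [("engineer", "she")] * 10
    )

    # Count how often each pronoun appears alongside each profession.
    counts = {}
    for profession, pronoun in training_data:
        counts.setdefault(profession, Counter())[pronoun] += 1

    def predict(profession):
        # Return the pronoun most frequently paired with the profession.
        return counts[profession].most_common(1)[0][0]

    print(predict("nurse"))     # "she" -- the 90/10 skew is reproduced
    print(predict("engineer"))  # "he"

Real generative models are vastly more sophisticated, but the underlying dynamic is the same: a 90/10 imbalance in the training data here becomes a 100/0 imbalance in the output, showing how bias can be not merely reproduced but amplified.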

Misinformation

‘Hallucination’, the tendency of a model to generate an incorrect answer with confidence, is a major source of misinformation risk. Generative AI can also be used to create ‘deepfakes’ and other manipulated content, which can spread misinformation or cause harm. And because generative AI employs machine-learning techniques to generate new content from its training data, models trained on publicly available information may end up amplifying misinformation already present in that information.

Safety concerns/accountability

We have already seen examples of autonomous machines, and embedding AI tools in them is expected to become more common, yet much ambiguity remains about liability and accountability in decision-making. The health-care industry should get a welcome boost from autonomous AI-powered diagnostic tools, but if bad actors can combine AI technology with synthetic biology, serious problems could ensue: in theory, they might use AI systems to synthesise viruses that were previously unfeasible for individuals to create, potentially leading to a very dangerous pandemic. Automation could also be extended to weapons. For instance, a weapon could independently identify and engage targets based on programmed constraints and descriptions.

Data privacy and cyber security

Generative AI can empower bad actors with harmful instruments such as malware, phishing and identity-based ransomware attacks. These threats could have broad-based implications for cybersecurity, particularly for email security, identity security and threat detection. Generative AI models can also be used to create realistic synthetic data, raising concerns about the protection of individuals’ privacy: data collected to train AI models for legitimate purposes can be used in ways that violate the privacy of the data owners.

Implications for charity investors

In our last blog, we discussed the disruptive potential of AI. If AI does turn out to be as transformative as it appears it will be, it could be put to both virtuous and, unfortunately, nefarious ends. But new technology has always been accompanied by a fear of the unknown.

Although AI could open up a number of investment opportunities, these ethical issues may also represent a reputational risk for charities, and they could have direct impacts on the way charities operate. For example, a rise in unemployment could increase pressure on charities’ resources, and, given that charities are particularly vulnerable to cybersecurity threats, AI could make those threats more sophisticated and harder to identify.

While we do not yet know how the AI revolution will unfold, charities may want to consider how the risks, as well as the opportunities, are reflected in their investment portfolios. At Newton, our responsible investment team works closely with our analysts and portfolio managers to help identify risks such as these. For a theme as disruptive as AI, we keep a close eye on potential issues that could become material to the investments we make. In addition, our active investment approach allows us to discuss these risks with companies where relevant.

Alison El-Araby is a portfolio manager and Janice Kim is an associate portfolio manager at Newton.

 
