AI and Accountability: The Emerging ESG Risk Institutional Investors Can’t Ignore

May 20, 2025


By Jack Grogan-Fenn

A shareholder proposal filed by a Canadian trade union has called on Thomson Reuters to strengthen its artificial intelligence (AI) governance framework to meet investors’ human rights and privacy expectations.

The British Columbia General Employees’ Union (BCGEU) proposal requests that the company align its AI governance framework with the UN Guiding Principles (UNGPs) on Business and Human Rights. It also asks the firm to assess whether its legacy trust principles are suitable for navigating growing and emerging AI-related risks.

Thomson Reuters’ AGM will take place on Wednesday 4 June.

While acknowledging the benefits of AI, the BCGEU said that the potential misuse of the technology, data privacy issues, algorithmic bias and other human rights concerns are alarming investors in the company.

The proposal stated that firms developing and licensing AI technologies face increasing legal, regulatory and reputational risks, arguing that voting in favour of the proposal is in the interests of shareholders given Thomson Reuters’ rising exposure to such risks.

“We argue that the company’s role in these controversial practices presents a risk to investors,” the proposal read. “As [Thomson Reuters] expands its AI and genAI technology offerings, risk may increase.”

Thomson Reuters has made several investments in AI in recent years, spanning acquisitions and new product development. Generative AI (genAI) in particular emerged as a key focus in the firm’s most recent Social Impact and ESG report.

The proposal stated that the company’s AI products, including genAI tools, have been “directly linked” to immigrant raids and deportations across the US, which have been “widely recognised as violating multiple rights”.

This includes Thomson Reuters’ genAI-enabled Consolidated Lead Evaluation and Reporting (CLEAR) investigative software, which has been used by US law enforcement and government agencies such as Immigration and Customs Enforcement.

According to the proposal, CLEAR can capture billions of data points, including arrest, cell phone, criminal, and property records, as well as real-time geolocation data. This has raised human rights and privacy concerns among investors, lawmakers, lawyers and human rights organisations.

Thomson Reuters settled a US$27.5 million lawsuit over the unauthorised sale of personal data and information through CLEAR. The settlement also required improvements to the tool’s safeguards.

“AI compounds the issues of the already problematic data uses, and the rapid expansion of AI features within high-risk products like CLEAR highlights the need for explicit rights-based AI governance to boost the company’s AI strategy,” the proposal added.

Thomson Reuters has recommended that shareholders vote against the proposal, stating that it has a “comprehensive framework” of AI governance and risk management. This includes data and AI ethics principles, responsible AI practices and board oversight of the technology.

“The Board is of the view that the governance structure is well-suited for effectively overseeing responsible use and development of AI,” Thomson Reuters stated.

BCGEU’s proposal acknowledges that the company’s principles recognise some risks associated with AI, but states that its disclosures do not include a human rights impact assessment.

Thomson Reuters has claimed that its principles draw on guidelines such as the UNGPs, but they neither reference the UNGPs directly nor commit to human rights due diligence or stakeholder engagement, unlike fellow technology firms such as Microsoft and Salesforce.

Responsible AI and oversight of the technology are an increasing priority for shareholders, as illustrated by the growing number of shareholder resolutions on the topic. In total, 17 proposals have been filed for the 2025 proxy season, up from six the previous season.

Alongside the proposal at Thomson Reuters, there are AI-focused shareholder proposals set for upcoming AGMs at five further companies, including technology giants Amazon, Alphabet and Meta.

Both Amazon’s and Meta’s AGMs will see shareholders vote on proposals from anti-ESG shareholder group the National Legal and Policy Center asking for tighter control over the development and use of genAI.

Meanwhile, pro-ESG investors including As You Sow and the AFL-CIO have filed proposals requesting greater transparency and stronger ethical commitments on AI. Like BCGEU’s proposal, these argue that without clear governance frameworks, companies may be exposed to significant reputational, financial and operational risks.

