
ESG and AI

Overview of general AI compliance guidelines for investors. Source: AllianceBernstein


Context


Artificial intelligence (AI) enables computers and machines to execute tasks that previously only humans could perform, such as those requiring problem-solving capabilities. The technology represents a significant opportunity for a wide range of users who may benefit from its capabilities, yet it simultaneously poses a set of risks that both AI users and developers must weigh. For investors, it is crucial to understand the current state of AI regulation and the ethical risks AI may pose to business operations and ESG activities.


Regulation and governance for AI are developing rapidly, but progress is uneven across jurisdictions. One example is the EU Artificial Intelligence Act (AIA), passed in March 2024, which sets compliance obligations for AI developers and users according to the level of risk a system poses. AI systems are classified as unacceptable risk (such as biometric identification systems), high risk (systems used as safety components of products, such as driving assistance, or in “essential services” such as education, healthcare, banking, law enforcement, and critical infrastructure), and non-high risk (such as chatbots).


Unacceptable-risk systems are prohibited; high-risk systems must meet stringent compliance obligations; and non-high-risk systems are generally required only to fulfill minimal transparency obligations. The table below outlines the requirements the AIA places on both developers and deployers of high-risk AI systems.
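As a complement to that table, here is a minimal Python sketch of the tier structure just described. The tier names, examples, and obligation lists are paraphrased from this summary (the high-risk obligations shown are a condensed, non-authoritative reading of the Act), not the Act's legal text.

```python
# Illustrative sketch only: a simplified mapping of the AIA's risk tiers
# to the obligations described above. Tier names, examples, and obligation
# lists are paraphrased from this summary, not the Act's legal text.

AIA_RISK_TIERS = {
    "unacceptable": {
        "examples": ["certain biometric identification systems"],
        "obligations": ["prohibited from the EU market"],
    },
    "high": {
        "examples": ["driving assistance", "education", "healthcare",
                     "banking", "law enforcement", "critical infrastructure"],
        "obligations": ["risk management system", "data governance",
                        "technical documentation", "human oversight",
                        "conformity assessment"],
    },
    "non_high": {
        "examples": ["chatbots"],
        "obligations": ["minimal transparency obligations"],
    },
}

def obligations_for(tier: str) -> list[str]:
    """Look up the compliance obligations for a given risk tier."""
    try:
        return AIA_RISK_TIERS[tier]["obligations"]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(obligations_for("high"))
```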

Implications


ISS ESG has identified 21 of the 73 ESG Corporate Rating industries as “having a higher level of risk due to potential impacts on data privacy, physical safety, and discrimination.” AI users in these industries may benefit from developing a quality management system that encompasses risk management, accountability, and quality control procedures to be applied before and during AI deployment.

For investors and companies looking to employ AI services in their work, it is crucial to be aware of the risks that accompany AI and of the appropriate measures for ensuring transparency and safety. While developers and users outside the EU, or in jurisdictions where AI governance is still taking shape, may not yet have a clear framework for their obligations, proactive quality management, disclosure and transparency practices, and robust risk assessment will help them stay ahead of AI regulations as they emerge.

The EU AI Act introduces regulations for developers and users of AI systems. The EU AI Act Compliance Checker is a free tool that helps businesses assess the risk level of their AI systems; this matters because the Act assigns different requirements based on risk. By understanding their risk category, companies can identify potential areas of non-compliance and take steps to mitigate them, helping them avoid the hefty fines and reputational damage associated with violating the Act. Users, not just developers, are held accountable for the tools they use in their business operations, so a resource for flagging potential concerns can be especially valuable in high-risk industries.
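The Compliance Checker itself is an interactive web tool; the sketch below is only a rough illustration, under assumed keyword lists, of the kind of first-pass screening such a tool performs. The PROHIBITED_USES and HIGH_RISK_DOMAINS sets are illustrative assumptions, not the official tool's logic or the Act's legal tests.

```python
# Hypothetical screening helper, loosely inspired by the idea of the
# EU AI Act Compliance Checker. The keyword lists are illustrative
# assumptions; a real assessment needs legal review, not keyword matching.

PROHIBITED_USES = {"social scoring", "real-time remote biometric identification"}
HIGH_RISK_DOMAINS = {"education", "healthcare", "banking",
                     "law enforcement", "critical infrastructure"}

def screen_use_case(description: str) -> str:
    """Return a coarse risk tier for a free-text use-case description."""
    text = description.lower()
    if any(use in text for use in PROHIBITED_USES):
        return "unacceptable: prohibited under the Act"
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return "high-risk: full compliance obligations apply"
    return "non-high-risk: transparency obligations may still apply"

print(screen_use_case("chatbot that answers banking customers' FAQs"))
# Flagged high-risk because it touches the 'banking' domain.
```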

Strategic Implications


On the positive side, AI offers increased efficiency and scalability. Repetitive tasks like data collection, analysis, and reporting can be automated, freeing up our team for strategic work and client relationship building. This translates to better service for our clients and the potential to grow our client base. Additionally, AI can analyze vast amounts of data to identify trends and patterns that human analysts might miss, leading to more accurate and insightful assessments for our clients.


However, there are also challenges to consider. Implementing and maintaining AI can be expensive, with upfront costs for acquiring the technology and ongoing costs for training and upkeep.


Another concern is transparency: it can be difficult to understand how certain AI systems reach their conclusions, which raises questions about the accuracy and reliability of their recommendations. Furthermore, AI algorithms can inherit biases from the data they are trained on, so careful vetting of any AI service we choose is crucial to ensure it is built on unbiased datasets.
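Vetting for bias can be made partly quantitative. One common first-pass screen, sketched below with made-up data and placeholder column names, is the disparate impact ratio: the favorable-outcome rate for one group divided by the rate for another, where values well below 1.0 flag potential bias. This is a screening heuristic, not a full fairness audit.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str,
                     outcome_col: str) -> float:
    """Ratio of favorable-outcome rates between the two groups in group_col.

    Assumes outcome_col is 1 for a favorable model decision and 0 otherwise,
    and that group_col has exactly two values. Column names are placeholders.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    if len(rates) != 2:
        raise ValueError("Expected exactly two groups for this simple check")
    return rates.min() / rates.max()

# Toy example with made-up data; real vetting would use the vendor's
# training or evaluation data.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
ratio = disparate_impact(data, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
# Ratios below ~0.8 are often treated as a warning sign (the informal
# "four-fifths rule" used in some fairness and employment-law contexts).
```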

