17 December 2024 · 9 minute read

The AI revolution: Transforming the asset management industry in Europe and beyond

AI is at the forefront of the current digital transformation. And the financial markets aren’t immune to this rapidly evolving technology.

Asset managers are integrating AI into their operations, racing against the clock to harness its power and reap its rewards. But they should be aware of the implications of the recently adopted Artificial Intelligence Act (AI Act) for the asset management industry, and of the steps they need to take to navigate this new EU regulatory landscape.


AI in the financial services industry

According to a recent study, 64% of businesses believe that AI will increase their productivity, while 40% of business owners are concerned about technology dependence. Other estimates show that, across industries globally, generative AI could add the equivalent of USD2.6 trillion to USD4.4 trillion in economic value annually.1

The asset management industry has been notably receptive to AI. From using robotic process automation (RPA) in portfolio management processes, to identifying and executing investment trends at high frequency, to performing document analysis and comparisons in seconds, asset managers worldwide have embraced the benefits of AI.

AI technologies are employed across multiple tasks and functions, including risk management, asset allocation, and client engagement. BlackRock’s Aladdin and J.P. Morgan’s COiN are examples of global asset managers using AI platforms for portfolio management, risk analysis, market insights and client interaction purposes. Moreover, AI-driven robo-advisors have emerged as a valuable tool for asset managers, proving particularly helpful in managing smaller accounts.

One notable application of AI is predictive analytics, where asset managers use AI to forecast market trends and asset performance. Natural language processing (NLP) technologies are being used to analyse news articles, social media, and financial reports, providing insights into market sentiment and potential investment opportunities.
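
As a purely hypothetical illustration of the idea behind NLP-driven sentiment analysis, the Python sketch below scores news headlines against hand-picked word lists. Production systems rely on trained language models rather than keyword matching; the word lists and headlines here are invented for the example.

```python
# Toy sentiment scorer for news headlines. The word lists below are
# made up for illustration; real systems use trained NLP models.

POSITIVE = {"beats", "growth", "upgrade", "record", "rally"}
NEGATIVE = {"miss", "downgrade", "lawsuit", "recall", "selloff"}

def sentiment_score(headline: str) -> int:
    """Return a crude score: +1 per positive word, -1 per negative word."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

headlines = [
    "Fund posts record growth after analyst upgrade",
    "Regulator opens lawsuit over product recall",
]
for h in headlines:
    print(f"{sentiment_score(h):+d}  {h}")
```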


What’s happening in Luxembourg?

Having anticipated the rapid development of new technologies, Luxembourg stayed one step ahead by introducing a national AI strategy in 2019, aiming to make the country one of the most digitally advanced societies in the world. The strategy emphasizes a human-centric AI framework, promoting collaboration across borders, investment in AI, and optimization of the data market.

In the same year, the government launched AI4Gov, a programme designed to encourage the adoption of AI technologies across government services by automating processes such as topographical object recognition, text transcription, and anomaly detection. Luxembourg has also invested in MeluXina, one of Europe’s greenest and most powerful supercomputers, which is designed to support AI and high-performance computing projects in various sectors, including healthtech and research.

The Digital Luxembourg initiative integrates AI within the broader scope of the country’s digital transformation efforts, fostering public-private collaborations, research, and development of AI technologies across industries.

These state-spearheaded efforts to promote the technology have also resonated with the industry. According to the CSSF, around 30% of companies in Luxembourg’s financial sector were already using AI in their processes in 2021, showing that adoption in this strategic sector has been widespread.


The AI Act

Recognizing the transformative potential and inherent risks of AI technology, the European legislators approved Regulation (EU) 2024/1689, commonly known as the AI Act. The AI Act entered into force on 1 August 2024 and provides for the gradual application of its provisions, with the majority of the new rules taking full effect on 2 August 2026.

This landmark piece of legislation establishes a comprehensive regulatory framework for AI, setting stringent rules and guidelines to ensure AI systems are safe, transparent, and respect fundamental rights, while promoting innovation.

In a nutshell, the AI Act applies to users and providers of AI systems in the EU. Providers of AI are entities that develop or supply AI systems and bear the primary responsibility for ensuring that their AI systems comply with the safety, transparency, and ethical standards mandated by the AI Act. They must conduct rigorous risk assessments and maintain robust documentation to demonstrate compliance. Users of AI (referred to in the AI Act as “deployers”), on the other hand, are entities that employ AI systems in their operations but don’t necessarily develop them. Asset managers typically fall into this latter category, as they often use AI tools and platforms developed by third-party providers. Users are responsible for ensuring that the AI systems they deploy are used in accordance with applicable regulations and that they don’t pose undue risks to clients or the market.

The AI Act is first and foremost a piece of EU legislation that binds EU-based entities. However, like the General Data Protection Regulation (GDPR), the AI Act has extraterritorial reach in certain circumstances. It applies to providers based outside the EU which place AI systems or general-purpose AI (GPAI) models on the EU market, or “put them into service” in the EU. Importantly, the AI Act also applies to both providers and deployers/users to the extent that the “output” of the AI system is “used in the EU”. Although “output” is not currently defined by the AI Act, nor is it clear how the term will be interpreted, it can be anticipated that the concept would encompass any final result produced by an AI system (eg a report or a market forecast).

The AI Act adopts a tiered, risk-based classification system, with corresponding obligations and restrictions on users or providers depending on the level of risk as assessed by the EU. Certain AI systems considered to pose an unacceptable risk (such as those which threaten fundamental rights and democracy) are prohibited, while a considerable number fall into the minimal and limited risk categories. The AI Act allows the unrestricted use of minimal-risk AI in the EU, such as AI-enabled video games or spam filters. Limited-risk AI systems, such as chatbots, are mainly subject to transparency obligations.
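
To make the tiering concrete, here is a minimal sketch in Python of the four risk tiers and the kind of treatment each attracts. The tier names follow the AI Act, but the example systems and one-line summaries are simplified illustrations, not legal categorisations.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (summaries are illustrative only)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance obligations before and during use"
    LIMITED = "transparency obligations (eg users must be told they face AI)"
    MINIMAL = "no specific obligations under the Act"

# Illustrative pairings only; real classification requires legal analysis.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "creditworthiness scoring of natural persons": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```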

The core of the AI Act concerns “high-risk” AI systems, which are typically found in areas such as biometric identification and the management and operation of critical infrastructure. The enforcement of the AI Act, while primarily undertaken by the competent authorities of each member state, will be coordinated by the European Artificial Intelligence Board (EAIB), specifically created for this purpose by the European Commission. In Luxembourg, recently published draft legislation designates the Luxembourg National Data Protection Commission (Commission Nationale pour la Protection des Données – CNPD) as the relevant national market surveillance authority under the AI Act.


The implications

Asset managers should prepare for the AI Act by taking several proactive steps. They need to compile an inventory of all existing or planned AI systems used in their operations and evaluate whether any of them are covered by the AI Act. It’s crucial that they analyse and classify the relevant AI systems based on their risk level and identify the compliance requirements, if any, for each type of AI system used. Although this would not generally be the case, asset managers using high-risk systems (eg to evaluate the creditworthiness of natural persons or establish their credit score in the context of KYC assessments) would have to comply with a plethora of regulatory requirements, including taking appropriate technical and organisational measures to ensure they use such systems in accordance with their instructions for use. Failure to comply may result in penalties or even civil liability for damages caused by the AI system’s actions.
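
A minimal sketch of what such an AI-system inventory might look like is shown below. The record fields, system names and compliance actions are hypothetical placeholders chosen for illustration; an actual inventory would follow the firm’s own compliance framework.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an asset manager's AI-system inventory (illustrative)."""
    name: str                  # hypothetical system name
    supplier: str              # third-party provider or developed in-house
    business_use: str          # eg portfolio analytics, KYC, client chat
    risk_tier: str             # per the firm's AI Act assessment
    compliance_actions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="DocCompare",     # hypothetical
        supplier="Third-party vendor",
        business_use="Document analysis and comparison",
        risk_tier="minimal",
    ),
    AISystemRecord(
        name="KYC-Score",      # hypothetical
        supplier="Third-party vendor",
        business_use="Creditworthiness scoring in KYC assessments",
        risk_tier="high",
        compliance_actions=[
            "Use in accordance with the provider's instructions for use",
            "Apply appropriate technical and organisational measures",
            "Monitor operation and keep records",
        ],
    ),
]

for record in inventory:
    print(f"{record.name} ({record.risk_tier} risk): {record.business_use}")
```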

It’s important that asset managers integrate the use of AI into their information and communication technology (ICT) risk management and compliance policies. They should also maintain strong policies to meet their ICT obligations under other pieces of legislation, such as the GDPR or the Digital Operational Resilience Act (DORA).

Asset managers need to ensure that data governance and managerial best practices are in place before running their AI systems. For example, they need to ensure that the AI systems they currently use in their business operations, such as in the context of automated trading or the provision of investment advice, comply with any related regulatory obligations under MiFID. They may also need to ensure that AI systems used in their fund management operations comply with any UCITS/AIFMD obligations, and ensure appropriate outsourcing and delegate oversight where delegates use AI systems.


Threat or opportunity?

AI is here to stay and will permeate all sectors of the economy. Keeping pace with its rapid evolution is therefore paramount for actors in the financial industry. Integrating AI could enhance operational efficiency, improve decision-making, and redefine competitive advantages in the global investment funds industry. However, the challenges posed by regulatory frameworks – like the AI Act in Europe – will require asset managers to adapt their approach and implement compliance strategies to ensure the robust and trustworthy use of these technologies.

Rather than viewing the AI Act as a threat, asset managers should consider it an opportunity to enhance their practices and improve investor trust. Good practices, such as transparency in AI decision-making processes and rigorous testing of AI systems, can lead to more robust risk management strategies. In the long run, integrating these practices can lead to more sustainable growth for asset managers, aligning with broader trends toward responsible investment and corporate social responsibility in the context of AI.2

*This article was previously published in AGEFI Luxembourg.


1 McKinsey & Company, “The economic potential of generative AI: The next productivity frontier” (2023) and McKinsey & Company, “Capturing the full value of generative AI in banking” (2023).
2 Financial Times, “Harness the power of AI to tackle financial crime” and Financial Times – Partner Content by EXL, “No single road: why harnessing AI is a nuanced business”.