Human in the Machine – Regulation of AI in Finance


This blog is based on “Artificial Intelligence in Finance: Putting the Human in the loop”, a new paper by Dirk A. Zetzsche (the University of Luxembourg), Douglas Arner (the University of Hong Kong), Ross Buckley (the University of New South Wales), and Brian W. Tang (the University of Hong Kong). 

This is the first of the new CFTE Academic Papers Series. In this series, we curate and select some of the world’s leading research on the topics of finance, entrepreneurship and technology.

The full version of the paper is available here.


Financial services have always integrated technological innovation, yet the most recent wave of financial technology (FinTech) has seen unprecedented growth. Rapid developments in data, storage, communication, computing power and analytics have made Artificial Intelligence (AI) increasingly ubiquitous, and nowhere more visibly than in the financial industry — today the most globalised, digitised and datafied segment of the world’s economy.

AI encompasses a range of forms, but it is the use of self-learning algorithms, such as reinforcement learning, that is making humans increasingly redundant in the world of finance. Algorithms have been used mainly at the front or back end of processes such as operations and risk management, payments and infrastructure, data security and compliance, as well as customer-facing services. More use cases are appearing as the technology develops, promoting new efficiencies and delivering new kinds of value: algorithmic trading, for instance, now accounts for around 70% of total orders. The trend is continuing, and according to recent reports two-thirds of UK financial institutions use machine learning in some form, while 89% of financial institutions in Hong Kong have adopted or plan to adopt AI.

AI in Financial Services
AI – imperfect invention

Nevertheless, progress comes with risks. One of the most obvious concerns is data, particularly its dependency, availability and interdependency. Data collection is expensive, and ensuring data is relevant, unbiased and of high quality is very difficult. Likewise, uncoordinated decisions by algorithms have previously led to extreme volatility, while the potential tacit collusion of self-learning algorithms also has to be addressed by regulators. The network effects and scalability of AI have undoubtedly increased the complexity of the financial system and created additional dependencies that could have systemic effects, challenging the financial stability and sovereignty of even developed economies. Threats could also come from malicious actors, who are becoming increasingly adept at using the technology to attack, manipulate or harm national security through economic backdoors.

The inevitability of these and other risks has led leading institutions to agree on value-based principles for responsible AI. Regulatory bodies in Europe and Asia have also proposed their own approaches. These include the authorisation of AI, with potential licensing requirements or mandatory insurance schemes; improved qualifications for core personnel; and stronger legal frameworks around the outsourcing of AI and its role in key functions of financial institutions.

The role of the human factor

To ensure AI is deployed responsibly, the role of humans in the machine has to be taken into account. Until regulators can effectively monitor automated systems, it is important to establish personal responsibility, which will ensure sufficient due diligence and explainability standards if issues arise. The consequence is that an adjusted responsibility framework for senior management will be required, including a system to address issues arising from AI in finance, particularly information asymmetry, data dependency and interdependency.

AI is emerging as a critical driver in financial services, but until its impact is better understood (an understanding constrained by the limited ability to control AI internally, the risk of regulatory over-deterrence, and the rapid growth of FinTech startups), the presence of humans in the loop to control the AI remains paramount.


This is part of CFTE Academic Papers Series. We curate and select some of the world’s leading research on the topic of finance and technology. The full versions of our papers are accessible here.

Do you want to learn more about AI in Finance? Enrol into our CPD accredited course here.

This post was written by Polina Levyant, Research Analyst at CFTE.


Learn the skills of Fintech

