Publication date: 1 July 2021
Opening remarks by Ana Teresa Moutinho, Head of the Supervisory Processes Department, EIOPA, delivered to the Special Committee on Artificial Intelligence in a Digital Age (AIDA) and the ECON Committee, European Parliament
Data processing has historically been at the very core of the insurance business, which is rooted strongly in data-led statistical analysis. Mathematical models to process data have always been used to inform underwriting decisions, price policies, settle claims and prevent fraud. The sector has long pursued ever more granular datasets and more predictive models, so the relevance of Big Data Analytics for insurance comes as no surprise. Additionally, the Covid-19 pandemic seems to have accelerated the adoption of artificial intelligence throughout the insurance value chain.
In 2019, EIOPA launched a thematic review on the use of Big Data Analytics by insurance undertakings and intermediaries. We found that Big Data Analytics tools such as artificial intelligence and machine learning were already in use by 31% of the participating companies, and another 24% were at a proof-of-concept stage.
Artificial intelligence systems are used by insurance undertakings at all stages of the insurance value chain. They are increasingly used within insurance to process new and existing datasets to underwrite risks and price insurance products, or to launch targeted marketing campaigns and cross-sell insurance products. Artificial intelligence systems are also increasingly used to process claims in a more timely manner and to fight fraud more efficiently.
The benefits arising from artificial intelligence in terms of prediction accuracy, automation, the design of new products and services, and cost reduction are remarkable. However, there are also growing concerns about the impact that the increasing adoption of artificial intelligence could have on the financial inclusion of protected classes and vulnerable consumers, or on our society as a whole. Some of these risks are not new, but their significance is amplified in the context of Big Data Analytics.
This is particularly the case regarding ethical issues relating to the fairness of data use, as well as the explainability and transparency of so-called “black-box” artificial intelligence systems. While the current legislative framework already caters for artificial intelligence applications in insurance activities (e.g. the Solvency II Directive, the Insurance Distribution Directive, the General Data Protection Regulation and the upcoming e-privacy Directive), an ethical use of data and digital technologies requires a more extensive approach than merely complying with legal provisions; it needs to take into consideration the provision of public good to society as part of undertakings’ corporate social responsibility.
Therefore, addressing digital ethics for the insurance industry is a necessary task. The operation of the insurance market has important economic and welfare functions for the wider society, and it can generate both positive and negative externalities. In terms of social inclusion, life, health and non-life insurance lines all play an important role.
Against this background, in October 2019 EIOPA convened a consultative Expert Group on Digital Ethics, to allow a wide range of stakeholders to work together on identifying opportunities and risks associated with the growing use of artificial intelligence in insurance, including exploring possible limitations that might be needed.
The Expert Group on Digital Ethics has developed six governance principles to promote ethical and trustworthy artificial intelligence in the European insurance sector. The high-level principles are accompanied by additional guidance for insurance undertakings on how to implement them in practice throughout the artificial intelligence system’s lifecycle. This work was published last month on our website.
In a nutshell, the principles developed address issues concerning:
- Proportionality: conduct an artificial intelligence use case impact assessment to determine the governance measures required for a specific artificial intelligence use case;
- Fairness and non-discrimination: adherence to principles of fairness and non-discrimination when using artificial intelligence;
- Transparency and explainability: adapt the type of explanation provided to the specific artificial intelligence use case and to the different recipient stakeholders;
- Human oversight: adequate levels of human oversight throughout the artificial intelligence system’s lifecycle;
- Data governance and record keeping: the provisions of national and European data protection laws (e.g. the GDPR) should be the basis for implementing sound data governance adapted to specific artificial intelligence use cases;
- Robustness and performance: use of robust systems, whether developed in-house or outsourced to third parties, taking into account their intended use and their potential to cause harm.
Most of the principles I have just mentioned are also covered by the Commission’s legislative proposal on artificial intelligence, which we welcome and support.
EIOPA expects the work of the Expert Group on Digital Ethics to help insurance undertakings implement, in a proportionate manner, the combination of governance measures best suited to their respective business models and, more particularly, to the concrete artificial intelligence use cases they have implemented or intend to implement. We are also confident that this work can further enhance trust in the use of artificial intelligence by insurance undertakings.