#Ambition
Ambition of the mission

GLMs are limited in their ability to capture non-linear interactions between claim risk factors, which is why actuaries are looking for more powerful AI models to improve their predictions. These models, however, are becoming less and less transparent, while transparency and the absence of bias in predictive algorithms are prerequisites for their adoption.
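To make the limitation concrete, here is a minimal sketch, not drawn from the engagement itself, comparing a Poisson GLM with a gradient-boosted model on synthetic claim-frequency data that contains an interaction between driver age and vehicle power; the feature names, data, and scikit-learn models are illustrative assumptions only.

```python
# Illustrative sketch: a Poisson GLM misses a non-linear interaction that a
# gradient-boosted model captures. All data and features are synthetic.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_poisson_deviance

rng = np.random.default_rng(0)
n = 20_000
age = rng.uniform(18, 80, n)      # driver age
power = rng.uniform(40, 200, n)   # vehicle power (kW)
# True claim frequency includes an interaction: young drivers in powerful
# cars claim far more often than either factor alone would suggest.
lam = 0.05 + 0.002 * power / 100 + 0.15 * (age < 25) * (power > 120)
y = rng.poisson(lam)
X = np.column_stack([age, power])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glm = PoissonRegressor(alpha=1e-4, max_iter=300).fit(X_tr, y_tr)
gbm = HistGradientBoostingRegressor(loss="poisson", random_state=0).fit(X_tr, y_tr)

for name, model in [("Poisson GLM", glm), ("Gradient boosting", gbm)]:
    pred = np.clip(model.predict(X_te), 1e-6, None)  # deviance needs y_pred > 0
    print(name, "test Poisson deviance:", round(mean_poisson_deviance(y_te, pred), 4))
```

On data like this, the lower deviance of the boosted model reflects exactly the interaction effect the GLM cannot represent without hand-crafted cross terms.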
#Method
Our approach

Think
Performance & Explainability
Developing a fully transparent and explainable insurance premium prediction model raises many questions: How do AI and machine learning improve pricing performance? How can the recommendations of a machine learning algorithm be justified, and how can its decisions be made more explicit?
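One possible answer to the explainability question, sketched below under our own assumptions since the source does not name the technique actually used, is to attribute each predicted claim frequency to its input features with SHAP values; the model, data, and feature names are illustrative only.

```python
# Illustrative sketch: model-agnostic SHAP attributions for a boosted
# claim-frequency model. Not the client's implementation.
import numpy as np
import shap
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5_000
age = rng.uniform(18, 80, n)
power = rng.uniform(40, 200, n)
lam = 0.05 + 0.15 * (age < 25) * (power > 120)  # non-linear interaction
X, y = np.column_stack([age, power]), rng.poisson(lam)

model = HistGradientBoostingRegressor(loss="poisson", random_state=0).fit(X, y)

# Model-agnostic explainer: perturbs features against a background sample
# and measures how the predicted frequency moves.
explainer = shap.Explainer(model.predict, X[:200], feature_names=["age", "power"])
explanation = explainer(X[:5])

# Per-policy attributions: positive values push the predicted claim frequency
# (and hence the premium) up, negative values push it down.
for row in explanation.values.round(4):
    print(dict(zip(explanation.feature_names, row)))
```

Read this way, each premium recommendation comes with a per-factor breakdown that an actuary can check against domain knowledge and regulatory constraints.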
Make
Explainable predictive models
The actuaries adopted this explainability approach through a concrete example and its comparison with conventional methods. Clear explanations of the solutions put in place, the performance level of the premium predictions, and detailed control over the algorithm's decisions gave the team the confidence to deploy these practices in the insurer's future AI projects.
Scale
New fields of application & autonomy
Beyond an explained model, our client now has a concrete example of how to implement explainability for prediction algorithms. Other projects can benefit from this approach to reduce bias and build a high level of trust in AI models.
#Benefits
Indicators of success
