
“Algorithms and government”: the (digital) headlines have been full of it lately. These phenomena are relatively new in the public sector, but other sectors have considerably more experience with them. What can we learn from that experience?

The financial world

Banks and insurers have relied on models of all kinds for years, because reality is far too complex to grasp in full. Models are, by definition, simplifications of reality. As British statistician George Box put it back in 1976:

Essentially, all models are wrong, but some are useful. – George Box

Since banks and insurers have such a long history of using forecasting models, it is worth asking what government and other sectors can learn from them. Financial risk management, for example, distinguishes three lines of defense:

  1. First line of defense: modelers test their own models.
  2. Second line of defense: independent validation of the model, for example by a dedicated Model Validation department.
  3. Third line of defense: an audit of the overall model development, testing, and validation process.

When I began my career as an entry-level actuary more than 15 years ago, I had similar discussions about the gray area between differentiation and discrimination. Should you differentiate your premiums based on factors such as gender and age? There is no easy answer to that, but it does start with understanding the issue. Other sectors can therefore learn a lot from the financial world, which has wrestled with this question for years.

Technical developments

Technology has not stood still in recent years either, and on this front all sorts of new tools are available. One I would like to highlight in particular is LIME (Local Interpretable Model-agnostic Explanations); consequently, this part is somewhat more technical. The problem with the explainability of models is that it is very difficult to describe a model's decision boundary in an understandable way. LIME is a method that attempts to improve model transparency.

To build confidence in a model, we use cross-validation, among other techniques. By training the model on a different part of the dataset each time and testing it on the remaining data, we gain confidence in the stability of the model parameters. In addition, a separate hold-out validation set is kept that is only used to test the final model independently. This gives a picture of model performance on unseen data.
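To make this concrete, here is a minimal sketch of k-fold cross-validation using only NumPy. The data, the least-squares model, and all function names are illustrative assumptions, not a specific production setup: any estimator could take the place of the linear fit.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle the row indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, k=5):
    """Train on k-1 folds, test on the held-out fold, collect one score per fold."""
    folds = kfold_indices(len(X), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit a simple linear model by least squares on the training folds.
        coef, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
        pred = X[test_idx] @ coef
        # Mean squared error on the held-out fold.
        scores.append(np.mean((pred - y[test_idx]) ** 2))
    return scores

# Synthetic data: y = 2*x1 + 3*x2 plus a little noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=100)
scores = cross_validate(X, y, k=5)
print(scores)
```

If the five fold scores are all small and similar, that supports the stability claim above; one fold scoring much worse than the others is a warning sign.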

However, this does not help us understand why some individual predictions are correct and others are not. Especially with “black box” models, this is not easy to explain. This is where LIME comes in: it distills an explanation of why one particular observation led to its prediction. Which factors were decisive, and how sensitive is the model to small deviations?
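For the technically curious, the core idea behind LIME can be sketched in a few lines. This is an illustrative toy version, not the `lime` library itself: it perturbs one observation, weights the perturbations by proximity, and fits a weighted linear surrogate whose coefficients act as the local explanation. The `black_box` function stands in for any model we cannot inspect directly.

```python
import numpy as np

def black_box(X):
    """Stand-in 'black box': a nonlinear model we want to explain locally."""
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_style_explanation(instance, predict, n_samples=500, width=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations around the observation of interest.
    Z = instance + rng.normal(scale=width, size=(n_samples, len(instance)))
    yz = predict(Z)
    # 2. Weight each perturbation by its proximity (Gaussian kernel).
    dist = np.linalg.norm(Z - instance, axis=1)
    w = np.exp(-(dist ** 2) / (2 * width ** 2))
    # 3. Fit a weighted linear surrogate model around the instance.
    A = np.column_stack([np.ones(n_samples), Z - instance])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], yz * sw, rcond=None)
    # beta[1:] are the local feature weights: the "explanation".
    return beta[1:]

x0 = np.array([0.0, 1.0])
weights = lime_style_explanation(x0, black_box)
# Near x0 the true local slopes are cos(0) = 1 and 2*1 = 2,
# so the recovered weights should land close to [1, 2].
print(weights)
```

The weights tell us, for this one observation, which feature pushed the prediction hardest, which is exactly the per-prediction insight that global metrics like cross-validation scores cannot give.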

Data science with purpose

Data science is a means, not an end in itself. So always ask why you are doing something. What are its effects, and can you gain insight into them? Are those effects desirable? What can you do to prevent negative effects? Looking beyond the model: for us, that is “data science with purpose”.
