glep
Guillaume Lepage

Location: CA


Activity (contribution calendar)

Ratings Progression (chart placeholder)

Challenge Categories (chart placeholder)

Challenges Entered

Airborne Object Tracking Challenge

Latest submissions

No submissions made in this challenge.

Machine Learning for detection of early onset of Alzheimers

Latest submissions

No submissions made in this challenge.

Play in a realistic insurance market, compete for profit!

Latest submissions

graded 125518
graded 125077
graded 124812
glep has not joined any teams yet...

Insurance pricing game

3rd place solution

About 3 years ago

Nice, thanks for sharing.
I like your structure a lot.
Using the model's prediction (with no claim history) as an offset, as you did, is a clever way to use the same model for everyone, renewals and new business.
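
To make the offset idea concrete, here is a minimal sketch of what it could look like with xgboost's base_margin (variable names are hypothetical, not from your code):

```r
# Minimal sketch, hypothetical names: use the "no claim history" model's prediction
# as an offset so one base model serves both renewals and new business.
library(xgboost)

# base_pred: prediction from the model fitted without claim-history features
# claim_mat: matrix of claim-history features only (NA for new business)
dtrain <- xgb.DMatrix(data = claim_mat, label = claim_amount)
setinfo(dtrain, "base_margin", log(base_pred))  # offset is supplied on the link (log) scale

bst <- xgb.train(
  params  = list(objective = "reg:tweedie", tweedie_variance_power = 1.5, eta = 0.05),
  data    = dtrain,
  nrounds = 200
)
```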

Thinking about it, I wonder: xgboost has a smart way to deal with NAs; at each new branch, the NAs can go either way, whichever direction optimizes the cost function, so the NAs are actually used by the model. Knowing that, for your data with no claim info, were the predictions from (2) * (3) identical to the predictions from (4)? Or did they differ by a multiplicative constant?
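
To see that default-direction behaviour in action, here is a small self-contained sketch on toy data (not the competition data) showing where xgboost records the branch taken by missing values:

```r
# Toy sketch: xgboost accepts NAs natively; at each split, missing values are sent to
# whichever child minimizes the loss, and that "default direction" is stored in the tree.
library(xgboost)

set.seed(42)
x <- matrix(rnorm(1000), ncol = 5)
x[sample(length(x), 100)] <- NA            # inject missing values
y <- rpois(nrow(x), lambda = 0.1)

dtrain <- xgb.DMatrix(data = x, label = y, missing = NA)
bst <- xgb.train(
  params  = list(objective = "count:poisson", max_depth = 3, eta = 0.1),
  data    = dtrain,
  nrounds = 20
)

# The "Missing" column shows which child the NAs are routed to at each split
head(xgb.model.dt.tree(model = bst))
```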

I also checked my residuals for manufacture_year, and the difference between -11 and -12 is surprisingly stable. I mean, exposures are still relatively low, so it could be random, but it is intriguing! I wonder if we observe the same thing for the 40K policies not in the training data (probably, since your models performed well!).

Talking about those policies, I would have loved to get separate feedback for new business and renewals.

This game is addictive: even after it's over, we still have a lot of unanswered questions!

Congrats again, thanks for sharing.

2nd place solution

About 3 years ago

@simon_coulombe forbade me from using the sharing thread (the last message was perfect; it's better to let it end on that), so let's start a new thread.

So, I'm second. I will not lie, it was a pleasant surprise. I was 4th on the week 10 leaderboard, so I began to hope a little, but when no message came asking me to prepare for the town hall, I accepted it!

Then I saw the top participants' talks during the Zoom call and was really impressed.
My reaction was "oh, that is awesome, no wonder I could not beat those guys. Well done!"

Then the results came, and I was pleasantly surprised to see my name in them!

My solution is probably less sophisticated than what we saw today during the town hall.
It was simple by design, so I can probably claim a little credit, but I guess I also got lucky.

So let’s add my two cents to the conversation.

KPI and model selection

Like most participants, I did not look at the RMSE leaderboard; I only looked at the Tweedie deviance.
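
For reference, the mean Tweedie deviance is easy to compute by hand; here is a minimal sketch (the variance power of 1.5 is an illustrative assumption, not a value from the competition):

```r
# Mean Tweedie deviance for 1 < p < 2 (p = 1.5 used only as an example)
tweedie_deviance <- function(y, mu, p = 1.5) {
  2 * mean(
    y^(2 - p) / ((1 - p) * (2 - p)) -
      y * mu^(1 - p) / (1 - p) +
      mu^(2 - p) / (2 - p)
  )
}

# tweedie_deviance(claim_amount, predicted_amount)
```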

My main tool for deciding whether a model was good was to replicate the competition framework with different models (like @michael_bordeleau did, I think).
Of course, it is impossible to replicate the variety of all participants' models, but I tried a few simple models and algorithms, not ones I wanted to use myself, just to compete against my own models.
Then I simulated a perfect market, as in the competition, and looked at the best loss ratios. Not profit, since that depends too much on the chosen profit margin. I also ran a lot of simulations where the profit margin was random, to help me choose both the model and the margin.
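
A minimal sketch of that kind of market simulation (names and inputs are hypothetical, this is not the code from my repository): each competing model quotes a premium, the cheapest quote wins the policy, and we compare market shares and loss ratios.

```r
# predictions: matrix with one column per competing model, one row per policy
# claims:      observed claim amounts
# margins:     one (possibly random) profit margin per model
simulate_market <- function(predictions, claims, margins) {
  premiums <- sweep(predictions, 2, 1 + margins, `*`)   # apply each model's margin
  winner   <- apply(premiums, 1, which.min)             # cheapest quote wins the policy

  sapply(seq_along(margins), function(m) {
    won <- winner == m
    c(market_share = mean(won),
      loss_ratio   = sum(claims[won]) / sum(premiums[won, m]))
  })
}

# e.g. random margins to stress-test the choice of model and margin together:
# simulate_market(pred_matrix, claim_amount, margins = runif(ncol(pred_matrix), 0, 0.3))
```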

Chosen models

About models, as I said, I created a lot of them, mostly xgboost and GAMs (Tweedie, frequency/severity, by-coverage models, …).
I thought about ensembling them but discarded the idea for fear of not underwriting enough business. In my simulations, my ensembled model had a really low market share (even after removing the models used in the ensemble), and I thought you needed around 15% to perform well.

I was obviously wrong since the top participants shared mostly stacked solutions!

Like some other participants, I used one model for the renewals (the 60K risks in the training data) and another for the new business.

The renewal model was evaluated on a train/test split (years 1-3 for training, year 4 for testing), with past-claims features.

The new business model was evaluated with 5-fold cross-validation.

The final renewal model is an xgboost frequency + severity model.
The final new business model is a Tweedie GAMM.
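
As an illustration of the two evaluation schemes, here is a minimal sketch (column names such as year and the claim-history prefix are hypothetical):

```r
library(dplyr)
library(rsample)

# Renewal model: temporal split, claim-history features are kept
train_renewal <- filter(policies, year <= 3)
test_renewal  <- filter(policies, year == 4)

# New business model: 5-fold cross-validation, claim-history features are dropped
new_business  <- select(policies, -starts_with("claim_hist"))
folds         <- vfold_cv(new_business, v = 5)
```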

Feature Engineering and model loading

I did not spend much time on feature engineering (probably not enough), but I am really conservative about features in insurance. We have so few causal variables; most of them are merely correlated with causal variables. Even in telematics, the most important feature is often the frequency of harsh braking events. I mean, we brake to avoid crashes! So it's definitely not causal, but braking is correlated with the distance to the car in front, the speed, and other causal variables.
This lack of causal variables has been bothering me for some time now. I used to work in the energy sector, where we forecast electricity consumption. Our models were perfectly causal: when it's cold, you need to heat your home and you consume more.
In insurance, we love to use the credit score; it is very predictive, but it's not causal at all and probably unfair to use.

Anyway, end of the digression, but I would love to hear your thoughts on causality in insurance.

Back to my solution: the feature that gave me headaches was vh_make_model.
I tried target encoding, I tried embeddings, I tried (and kept) mixed models, but all these methods were too conservative and did not surcharge the high risks enough.

Then, around week 7 (I joined late and am not that smart), I realized that I did not care about getting a good estimate for every risk; I just did not want to underprice a risk.
So I added a post-model correction for two features, vh_make_model and city. I looked at the model residuals by modality, then applied a multiplicative correction using the ratio claim_amount / prediction. Of course, I wanted to be more conservative about discounts than surcharges, so I ended up with no discount (zero credibility for vehicles with fewer claims than expected) and full surcharge (full credibility for vehicles with more claims than expected).
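
In code, that asymmetric correction boils down to something like the following minimal sketch (the claim_amount and prediction columns are hypothetical placeholders, not the actual code):

```r
library(dplyr)

# Per modality of a feature: observed / predicted ratio, kept only when it is a surcharge
make_correction <- function(data, feature) {
  data %>%
    group_by(.data[[feature]]) %>%
    summarise(ratio = sum(claim_amount) / sum(prediction), .groups = "drop") %>%
    mutate(correction = pmax(ratio, 1))   # no discount, full surcharge
}

# Applied multiplicatively on top of the base model:
# final_prediction = prediction * correction[vh_make_model] * correction[city]
```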

Doing that, I knew there was a significant portion of the portfolio I could not underwrite, so I reduced my margin on the rest to still land around my target of 15% market share (I ended with 19%).
In week 10 I was 4th with only 5%; I felt that would not be enough, hence the margin reduction.

Conclusion

As I said, my solution is really simple; the asymmetric correction for vh_make_model and city is a very simple idea, but it seems to have made a huge difference performance-wise.
Even my rank of 157 in week 9 was not that bad.
I submitted a first model with no margin, realized it immediately, and resubmitted the correction. Unfortunately, the solution that was used was the one without any margin, not a good idea!

That's it for me. If someone wants to look at the code, it is now public here.

Before leaving, I want to thank @alfarzan (and crew) for this awesome competition. This was my first Kaggle-like competition. I usually don't like them because building a model to perform well on one very particular KPI does not make much sense to me (very personal opinion here, I'm probably wrong again!). Here, we had no single KPI but a game-theory framework where we needed the others to evaluate our own models.
So thank you very much Ali, you must have spent a lot of hours on this competition and you were always cheerful and helpful. Really well done, congrats.

I must also thank the very ugly @simon_coulombe and @demarsylvain, who dragged me into this competition. What a joy to code insurance pricing during the day and then do the exact same thing at night in my hobby hours. I was really pissed at them, but eventually I learned a lot, so thanks guys.

Thanks to all the participants who shared on the forum. I'm not good at that, so I did not contribute much, but it was awesome to read.

Another thing I loved in this competition was the imposed code format. It was really neat and forced all of us to write clean code. I will certainly keep the good practices acquired during this competition. One of them is to encode everything in the model object: the feature engineering (with recipes or otherwise), the model itself, and the corrections.
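
A minimal sketch of what "encode everything in the model object" can look like (the column names, fitted model, and correction tables are hypothetical placeholders):

```r
library(dplyr)    # for the pipe
library(recipes)

rec <- recipe(claim_amount ~ ., data = train) %>%
  step_impute_median(all_numeric_predictors()) %>%
  step_dummy(all_nominal_predictors())

# Everything needed at prediction time travels together in one object
trained_model <- list(
  recipe      = prep(rec, training = train),
  model       = fitted_xgb,
  corrections = correction_tables
)
saveRDS(trained_model, "trained_model.RDS")
```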

Finally, for the R users who are not familiar with the targets package: it is a real secret weapon when you want to build complex simulations; everything is so easy with this tool. For those who don't know this package from Will Landau, go check it out and start using it, you won't regret it. It saves time, it saves energy, and most importantly, it saves mental workload.
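
For the curious, a _targets.R pipeline for this kind of simulation work can be as small as the sketch below (the helper functions are hypothetical, not from my repository):

```r
# _targets.R
library(targets)
source("R/functions.R")  # hypothetical helpers: read_policies(), fit_all_models(), ...

list(
  tar_target(policies,    read_policies("training.csv")),
  tar_target(models,      fit_all_models(policies)),
  tar_target(simulations, simulate_market(models, policies)),
  tar_target(summary,     summarise_loss_ratios(simulations))
)
# Run with tar_make(); only outdated targets are rebuilt.
```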

That's it for me, thanks again, and congrats to the WorldExperts team. They made more profit than solutions #2 and #3 combined, respect!

R: package loading order problem (solved with conflicted)

About 3 years ago

Hi all,

Maybe it was mentioned before, but I got a weird bug on a submission: everything was fine when I ran it locally, but it crashed during submission.

It seems that the packages are not necessarily loaded in the order specified in model.R.

In my case, it caused a problem because the stats package was loaded last (which is odd, since I did not use it). Anyway, that meant stats::filter masked dplyr::filter, which was not what I wanted…

The solution is the conflicted package.

To make sure this kind of problem doesn't happen, load all your packages, run conflicted::conflict_scout(), and add steps like conflicted::conflict_prefer("filter", "dplyr").
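
Concretely, a minimal sketch of that setup at the top of model.R (the extra "lag" preference is just an example of another common stats/dplyr clash, not something the bug required):

```r
library(dplyr)
library(conflicted)

conflicted::conflict_scout()                    # list every masking conflict currently present
conflicted::conflict_prefer("filter", "dplyr")  # always resolve filter() to dplyr::filter
conflicted::conflict_prefer("lag", "dplyr")     # another frequent stats/dplyr conflict
```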

I mean, we should do this on every project, but in this case it is particularly useful.

Hope this helps.
