What on earth have I been doing for the last three years?

Matt Gosden
7 min read · Feb 17, 2023

--

I’ve ended up taking quite an unconventional path since the start of the pandemic. Some people think I am crazy and sometimes I wonder if I am too. So what have I been doing, and how do I feel about it three years on?

Firstly, cover the basics

Nothing much can happen in life if the basics are not covered. Through my amazing wife’s career, together with a portfolio of commercial things I have been doing since 2020, we have been keeping the show on the road at home. It’s mainly her contribution to be honest. As for mine:

  • Firstly, the smart people at Quantco have been developing analytics and data solutions for some of the largest European insurers. I’ve loved working with them as a senior advisor, supporting their work with clients on the life insurance side.
  • Secondly, the financial markets have certainly been rocky since Covid, but perhaps surprisingly this volatility has been helpful for many algorithmic traders. My trading bots started off as a Python coding exercise for me in 2018 to build out some micro-service applications. But I have been surprised that they seem to break the ‘strong efficient market hypothesis’ and deliver a decent risk-adjusted return through the cycle. Long may it last.

Then the passion …

Since leaving my exec role at Zurich in 2017, I have rediscovered my huge passion, creativity and skillset for technology. You know it is a passion when you seem to spend all your time learning, reading, researching, prototyping, creating, contributing, and explaining something. And this is what I spend the majority of my time on. I’ve done my 10,000 hours.

So much has changed since I was last writing code and messing with computers in the 1990s. But for various reasons I ended up spending most of my career in finance, actuarial, management consulting and then business management roles. I didn’t follow my original physics, engineering and technology path.

In the 2020s, things are so good for technologists. The frameworks and tools are really good and often make coding a pleasure now rather than the chore it used to be. The open-source community, which I’m heavily involved with now, is excellent and hugely dynamic.

And then there is the data science and machine learning / AI paradigm that has sprung up in the last decade. This brings together even more of the elements I loved, and also forgot that I was quite good at. Machine learning / AI has the logic and nice structure of software, but requires some pretty heavy statistics and mathematics. And interestingly, the cutting edge of AI research is now drawing on many concepts from physics that I learnt and forgot on my original path in the 90s: Shannon’s entropy theories, energy models. I’m even seeing some strange quantum-like behaviours in the algorithms I am currently working on. So for someone like me, it’s easy to see how you can get passionate about it. And the space is moving at a huge pace, so it is exciting and there are lots of opportunities if you can keep up.

But whilst I love the tech, technology for technology’s sake is actually quite boring. Being an engineer by birth, as I think my mother would confirm, means the joy is in creating a solution to a real problem. Ask a developer how best to learn new tech, and they will often say ‘find a cool project to work on’.

The best way to learn and develop is by applying technology to a use case you really want to solve. So what’s the problem we want to solve?

Automating financial guidance

My long-time collaborator, Peter Robertson, and I have been talking for nearly ten years about financial guidance and asking ourselves …

Surely the technology is getting good enough so that we could automate financial guidance and make it available to the mass market?

Being able to automate much of financial guidance would be hugely valuable for society and also for business. Today most consumers can’t afford financial advice as it is very expensive to deliver with a lot of human intervention. It’s hugely regulated too, for obvious reasons.

But mass-market consumers have guidance needs that are often simpler, yet they still need to understand the complex rules that apply to them and what the impact of their options will be. If consumers have the time, inclination and confidence, then there are many great sources of information they can read to do their own research and make their own decisions. But the data shows that only a small minority (perhaps 10% of people) do their own research. What about the rest of the population? This has been our exam question …

Can we create a machine that can guide a consumer through what is relevant for them in a financial guidance journey and help them make an informed decision?

So is AI the solution? I think it is, but not the type of AI the hype is focused on. Mainstream AI such as ChatGPT learns language patterns, and therefore learns content implicitly, from ‘big data’ out on the internet. On financial guidance topics there are thousands of blog posts, web pages, guides and articles, so it does have some implicit knowledge it can use in its responses. But our experiments from last summer showed that InstructGPT (before ChatGPT took all the press attention) got the outcomes wrong very often indeed. These Large Language Model systems have absolutely no guardrails that check against the ground truth of the domain knowledge. Contrast that with a human guidance expert, who would have to learn the content explicitly as part of their training. So today’s Large Language Models such as ChatGPT are not the solution.

What we need to add are elements of GOFAI (good old-fashioned AI), or what were sometimes called ‘Expert Systems’ back in the day. It will be a huge help if our systems can explicitly learn the domain expertise and therefore provide strong guardrails. It also allows us to be explicit about ‘why’ a customer was shown particular information or taken down a particular path. In systems that offer guidance to the vulnerable, we must assure ourselves, our clients and our regulators that the system does not hallucinate and make up responses as ChatGPT and others do.
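To make that concrete, here is a minimal, hypothetical Python sketch of what an explicit rules layer can look like. The rule and the allowance figure are invented for illustration and are not real regulatory guidance or our actual system; the point is that every response carries the explicit rule that produced it, so the ‘why’ is always available and there is nothing to hallucinate.

```python
# A minimal, hypothetical sketch of an explicit rules layer acting as a guardrail.
# The rule and the 60,000 allowance below are illustrative only, not real guidance.

from dataclasses import dataclass


@dataclass
class Guidance:
    message: str
    reason: str  # the explicit 'why' we can show to customers and regulators


ANNUAL_ALLOWANCE = 60_000  # illustrative figure


def pension_contribution_guidance(proposed_contribution: float) -> Guidance:
    """Return guidance plus the explicit rule that produced it."""
    if proposed_contribution > ANNUAL_ALLOWANCE:
        return Guidance(
            message="Your proposed contribution exceeds the annual allowance, "
                    "so part of it may be taxed.",
            reason=f"rule: contribution {proposed_contribution:,.0f} > "
                   f"annual allowance {ANNUAL_ALLOWANCE:,.0f}",
        )
    return Guidance(
        message="Your proposed contribution is within the annual allowance.",
        reason=f"rule: contribution {proposed_contribution:,.0f} <= "
               f"annual allowance {ANNUAL_ALLOWANCE:,.0f}",
    )


print(pension_contribution_guidance(70_000))
```

Because the reason is attached to the answer rather than buried in model weights, the same structure can be logged, audited and shown back to a regulator.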

Over the last three years, the tech capabilities have moved on a lot. We have iterated through a number of methods to do this and have already come quite far.

By 2021 we had already deployed some tools and technology that could offer guidance using new Monte-Carlo methods blended with methods from machine-learning decision trees. This delivers an intelligent journey for customers as they navigate guidance on topics where there are a lot of rules (tax rules, pension rules, and so on).
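As a rough illustration of that blend (not our actual method, and with an invented rule), the sketch below Monte-Carlo samples customer scenarios through a simple rule set and then fits a decision tree to learn which inputs actually drive the outcome. That learned ordering of questions is the kind of thing that can shape an intelligent guidance journey.

```python
# Hypothetical sketch: Monte-Carlo sampling of customer scenarios through an
# invented rule set, then a decision tree to learn which inputs drive the outcome.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 10_000

# Monte-Carlo step: sample plausible customer inputs
income = rng.uniform(10_000, 200_000, n)
contribution = rng.uniform(0, 80_000, n)
age = rng.integers(18, 75, n)

# Illustrative rule set (not real tax or pension rules):
# a contribution is 'flagged' if it exceeds 60,000 or exceeds the customer's income
flagged = (contribution > 60_000) | (contribution > income)

# Decision-tree step: learn which questions matter, and in what order,
# for deciding the outcome - this can drive the order of a guidance journey
X = np.column_stack([income, contribution, age])
tree = DecisionTreeClassifier(max_depth=3).fit(X, flagged)
print(export_text(tree, feature_names=["income", "contribution", "age"]))
```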

So this can already be used to create smarter financial calculators than those out there on the internet. But in exploring how to commercialise it, it seemed quite clear that this was probably too narrow as a commercial solution. It was really hard to get organisations to commit. It was also Covid, so procurement processes were all changing. We also heard feedback that organisations often have their own data they want to learn from in the use cases that matter most to them.

So since summer 2022, I have been researching and working on a more AI-centric version of our vision that could codify domain expertise from rules AND data. Interestingly, this can also be used to train digital agents that guide consumers through a variety of different journeys in an optimal and safe way. So it looks to have huge potential IF the technology solution can deliver what it promises.
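To show the shape of the idea, here is a small, hypothetical Python sketch (the step names, rule and scorer are all invented, and this is not the real architecture): a data-driven policy scores the possible next steps in a journey, but an explicit rules layer filters out anything that is not permitted for a particular customer, which is what ‘optimal and safe’ means here.

```python
# Hypothetical sketch of a 'safe' digital agent: a learned policy scores the
# candidate next steps, and an explicit rules layer vetoes anything not allowed.
# Step names, the rule and the scorer are invented for illustration.

from typing import Callable

STEPS = ["explain_tax_rules", "show_pension_options", "suggest_investment_ideas"]


def allowed(step: str, customer: dict) -> bool:
    # Guardrail: never move towards product suggestions for a customer
    # who has been identified as vulnerable
    if customer.get("vulnerable") and step == "suggest_investment_ideas":
        return False
    return True


def next_step(customer: dict, score: Callable[[str, dict], float]) -> str:
    """Pick the highest-scoring step among those the rules allow."""
    candidates = [s for s in STEPS if allowed(s, customer)]
    return max(candidates, key=lambda s: score(s, customer))


# Toy scorer standing in for a model trained on data
toy_score = lambda step, customer: {"explain_tax_rules": 0.2,
                                    "show_pension_options": 0.5,
                                    "suggest_investment_ideas": 0.9}[step]

print(next_step({"vulnerable": True}, toy_score))   # -> show_pension_options
print(next_step({"vulnerable": False}, toy_score))  # -> suggest_investment_ideas
```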

I’ve ended up very deep in some AI research rabbit-holes, engaging with fascinating up-to-date AI research on Causal Models, Graph Neural Networks, Synthetic Data creation and others. It’s a great community, and people are very generous with their ideas and content, publishing so much open source. But I was never going to be a career academic; I care more about the solution than the theory and the rigorous proof.

Of course most of these rabbit-holes have ended up being dead ends. But since November last year I’ve been treading a path that seems to be working. Interestingly, the solution draws a lot from the same mathematics that underlies the GPT models, but uses it in a different way and with completely different data sources.

So this latest, more powerful way of automating domain expertise is looking very promising. But I have already experienced the false summit fallacy a few times now, where it looks like the summit is near, only to find that there is a bigger mountain hidden further ahead to climb.

How’s my Ikigai?

So on reflection — how is my Ikigai barometer doing?

If you are not familiar with it, Ikigai is a Japanese philosophy around happiness and reason for being that sees perfect balance in achieving four key things at the same time:

  • Doing what you love.
  • Doing what you are good at.
  • Doing what the world needs.
  • Doing what pays the bills.

I love this simple framework.

Three of the circles are bright green spots compared to some of my career choices, and that is hugely motivating. But the fourth, the ‘being paid for’ circle, needs some attention. So from this it’s pretty clear what I should be focusing on next, isn’t it …

Ikigai diagram (source: Toronto Star)

If you are interested in finding out more then please get in touch with me via https://engagesmarter.co/

In particular, I would be interested to hear from you if:

  • You are an organisation with a use case you would like to discuss
  • You are a business or investor that also sees the potential in this and can help fund and incubate it
  • You are passionate about what you see here and would like to join the team if / when we can afford one
  • You are an actuary or a young technologist who wants to get more into the practitioner side of data science and AI and would like some advice
