
Back to the roots part I
Artificial intelligence affects us all. We now know how people in earlier eras of extremely fast technological change must have felt.
Having experienced the development of machine learning first hand for more than 30 years makes me want to pause for a moment and reflect. I want to share a speed run through my career timeline and tell you why the present moment feels like going back to the beginning, when the promise of computers that could generalise felt exciting and new.
The world of software, mathematics and the theoretical sciences is definitely changing. Soon machines orchestrated by humans will solve some of the open problems that scientists have struggled with. Soon we will all be able to write (buggy) software with any feature we might desire. Revisiting my roots gives me inspiration about what to do next in this world of almost infinite possibilities.
Artificial neural networks in the 1990s
Let us take the time machine back to the early 1990s(!), when I studied physics at the University of Copenhagen. A lot of physicists back then were interested in artificial neural networks. That captured my interest more than the more fashionable theoretical particle physics, and I started as a researcher analysing neural networks using the techniques that later won Giorgio Parisi the Nobel Prize.
I wrote my first implementation of error back-propagation—the algorithm that allows artificial neural networks to memorise training data—more than 30 years ago on my friend Bjørn’s very expensive desktop PC. We studied physics together and I rented a room in Bjørn’s apartment, so I could check up on the training progress often by visiting Bjørn’s room next to mine. It was a fast computer back in the day, but neural networks run on computers that follow Moore’s law, so today I could run that neural network approximately 2^(32/2) = 2^16 = 65536 times faster. Here I assume that 32 years have passed and that processor speed doubles every 2 years.
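For readers who have never seen it, the essence of error back-propagation fits in a few lines. This is a minimal sketch, not the original 1990s code: a one-hidden-layer network with sigmoid units trained on XOR (a classic toy task a linear model cannot solve), with the layer sizes, learning rate and epoch count chosen arbitrarily for illustration.

```python
# Minimal sketch of error back-propagation (not the author's original code):
# a 2-4-1 sigmoid network trained on XOR with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2-4-1 network.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Mean squared error before training, for comparison.
initial_mse = float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

lr = 1.0
for _ in range(5000):
    # Forward pass: compute hidden activations and the network output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back towards the inputs,
    # using the sigmoid derivative s * (1 - s) at each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(((out - y) ** 2).mean())
print(f"MSE: {initial_mse:.3f} -> {mse:.3f}")  # error shrinks as the network memorises XOR
```

The backward pass is just the chain rule applied layer by layer, which is why the same few lines scale from this toy network to today's models, given vastly more compute and data.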
Winter is coming
What I did not know back then was that the machine learning field was about to enter a so-called AI winter. There was not much more we could do with neural networks. They became a small applied engineering discipline, and most machine learning researchers, myself included, moved on to kernel methods such as support vector machines and Gaussian processes, and to methods development in Bayesian nonparametrics, variational methods and other topics.
In part II I will explain how the AI winter ended and how I reluctantly joined the deep learning revolution, eventually becoming an agentic coding evangelist.
Published: May 8, 2026