Communication and the Use of LLMs in Motorsports

In 1999, during my tenure as Head of R&D at Arrows Grand Prix, I embarked on a journey to enhance the performance of our F1 cars. At that time, genetic algorithms were emerging as a favoured optimisation tool, particularly in high-speed trading software. We were running them on Silicon Graphics visual workstations, which were relatively powerful for the era but nothing like what is available today, yet we could still conduct many runs. We employed these algorithms primarily to analyse tyre models and to search for other optimisation strategies. Interestingly, the optimisation often highlighted inaccuracies within our own models. For instance, one simulation model would consistently suggest 100% front weight on the car, indicating an error in the model that may have caught out a few teams over the years with unrealistic targets! These were the early days of using computers with limited power and brute-force algorithms.
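
To make the point concrete, here is a minimal sketch, not our original code: a toy genetic algorithm searching over front weight distribution against a deliberately flawed lap-time model (the numbers are invented purely for illustration). Because the toy model rewards front weight with no penalty, the optimiser happily drives the answer towards 100% front, exactly the kind of unrealistic result that tells you the model, not the car, is wrong.

```python
import random

def flawed_lap_time(front_weight):
    """Toy lap-time model (seconds). Deliberately flawed: it rewards
    front weight with no penalty, so the 'optimum' is 100% front."""
    return 90.0 - 5.0 * front_weight  # lower is better

def genetic_search(pop_size=30, generations=40, mutation=0.05):
    # Each individual is a front weight fraction between 0 and 1.
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by lap time (ascending: faster is fitter).
        population.sort(key=flawed_lap_time)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                       # crossover: blend two parents
            child += random.gauss(0.0, mutation)        # mutation: small random nudge
            children.append(min(max(child, 0.0), 1.0))  # keep within physical bounds
        population = parents + children
    return min(population, key=flawed_lap_time)

best = genetic_search()
print(f"Suggested front weight: {best:.1%}")  # converges towards 100%
```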


When I moved to McLaren F1, I was astounded by the wealth of “embedded knowledge” the team possessed, a term used in MBA circles for an organisation that has meticulously documented everything. To make this vast reservoir of knowledge more easily accessible to new engineers, I considered implementing an “on-prem” Google server, so that little nuggets of wisdom known only to a few people could be shared across the company. The underlying idea is that knowledge, when harnessed collaboratively, can be greater than the sum of its parts. Now, imagine if we could enhance a model like ChatGPT by incorporating this internal knowledge, derived from over three decades of racing expertise, to speed up the dissemination of knowledge and ideas (the first version of this article was prepared in May 2023; MS Co-Pilot and Google’s Bard are tackling precisely this at the moment).
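
One pattern being used for exactly this today is often called retrieval-augmented generation: rather than retraining the model, you search the internal documents for the passages most relevant to a question and place them in the prompt as context. The sketch below is purely illustrative; the notes and the crude keyword-overlap scoring are made-up placeholders, not anything from a real team archive.

```python
def score(query, document):
    """Crude relevance score: count of shared words (a real system would
    use text embeddings, but the principle is the same)."""
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

def build_prompt(query, knowledge_base, top_k=2):
    """Attach the most relevant internal notes to the question, so the
    LLM answers using the team's own knowledge as context."""
    ranked = sorted(knowledge_base, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Use the following team notes to answer.\n\n{context}\n\nQuestion: {query}"

# Hypothetical internal notes, for illustration only.
notes = [
    "Front wing flap adjustments of more than 2 degrees upset the tyre temperatures.",
    "The gearbox oil pump cavitates above 50C ambient unless the bleed is repeated.",
    "Pit stop radio calls should be confirmed twice when signal quality is poor.",
]

print(build_prompt("How should pit radio calls be handled with a bad signal?", notes))
```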


Fast forward to the Super Aguri F1 team. Our radios were not at the level of the other teams’, so we decided to experiment with improving the quality of our communication with the drivers. We agreed that one solution was to move the pitwall engineering station (the “prat perch”) into the air-conditioned, quiet, controlled environment of the engineering truck behind the pits. This improved communication quality and was a precursor to the now standard “Mission Controls” back at base in F1 and Formula E, where engineers can work in a quieter, controlled environment. This decoupling of tasks is a powerful concept. Obviously, “you can’t hammer a nail over the internet,” so specific tasks need to stay on the ground at the track, but many jobs can be done remotely.


The result was the now famous run-in between Anthony Davidson and a beaver while he was running third at the 2007 Canadian Grand Prix in Montreal. With the engineers positioned in the truck behind the pit garages and Anthony having to dive into the pits at the last minute, the mechanics were caught by surprise as the TV coverage announced he was coming into the pitlane! An example of the right intent, but not the right outcome!

Another solution to our communication problem was to think about texting. I went to see Charlie Whiting of the FIA in Monaco on the morning of the race to discuss possible solutions. Car communications are restricted to radio, but I argued that if our driver had had a hearing impairment, that would not have been entirely fair, hence the need for text-based communications. Charlie agreed to look at a proposal. We never did implement the concept, due to the need to redesign the steering wheel, but I have continued to think about communications and ideas that might solve such problems.


Since I started looking at genetic algorithms in 1999, rapid advances in computing power, including GPUs and TPUs, have enabled machine learning to evolve significantly. This computational growth has allowed complex models to be trained on large datasets, leading to powerful AI such as GPT-3 and GPT-4 and ushering in a new era of AI innovation. The astonishing rise of ChatGPT and large language models (LLMs) is the latest development, and the field is changing by the week, if not the day (this article may well be out of date by the time you read it; it was first written in May 2023!).


Could an LLM be trained on a smaller, domain-specific dataset and used to communicate more clearly with a driver? My current understanding of these LLMs is that the better the “prompt engineering” fed into the model, and the better the context, the better and more concise the answers. You may notice that when you type a question into Bing now, it first simplifies the prompt and then feeds it to the model. The more you narrow down the context, the better the answer. Here’s a simple example:
Me on the prompt line: “Please write a concise radio communication for an F1 driver with bad radio quality to ask them to come into the pit lane for new tyres.”


ChatGPT4: “Box, box, box. Tyres ready. Confirm, over.”


A silly, small example, but it shows how the tool can be used. I have seen, many times in the heat of battle, engineers (myself included) make mistakes. The more scenario planning and fast decision-making support available, the fewer the mistakes.
For example, ChatGPT could prepare draft radio communications and pop-up suggestions for a race engineer, based on knowledge built up from listening to “Mission Control” conversations or from information coming off the TV feed!
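
As a rough sketch of how that might be wired up, here is a minimal example using the OpenAI Python client as it existed around the time of writing (mid-2023). The model name, the system prompt and the example situation are my own assumptions for illustration, not part of any real team system.

```python
import openai  # pip install openai (the 0.27.x client available in mid-2023)

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_radio_call(situation):
    """Ask the model for a short, unambiguous radio message that a race
    engineer could read out, given a plain-English race situation."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You draft concise F1 radio calls. "
                        "Maximum ten words, no ambiguity, confirm-back phrasing."},
            {"role": "user", "content": situation},
        ],
    )
    return response["choices"][0]["message"]["content"]

# Hypothetical situation assembled from Mission Control notes or the TV feed.
print(draft_radio_call("Rain expected in 3 laps, car ahead pitting, poor radio quality."))
```

In practice, the situation text could be assembled automatically from Mission Control notes or the timing feed, with the race engineer simply approving or rejecting the suggested call.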


These are only simple examples, and I am sure you will all think of more every day. By the time this article comes out, more API integrations, private learning-model implementations and a host of other tools will have become available. I will watch with interest as this all begins to play out and would love to hear any ideas from engineers!


PS: this article was written with the aid of ChatGPT4

