Artificial intelligence
What’s next for rail, and how can leaders benefit?
Sarah Shooter, Editorial Content Manager, RSSB
Sarah: Vaibhav, can you set the scene of where we’re at with AI regulation in the UK rail industry and how it’s evolving?
Vaibhav: We are in an interesting space. AI application and use is accelerating, the technology is developing, and opportunities are growing. From a regulatory view in rail, the challenge is understanding the extent to which AI applications will require a reassessment of existing UK regulations. We also need to examine how product acceptance and deployment-related assurance in this area is developing.
The EU AI Act was agreed in 2023 and came into force on 1 August 2024. It sets out regulations applying to categories of AI systems, ranging from limited risk through high risk to prohibited AI systems. The rules apply after 6 months for prohibited AI systems, 12 months for General Purpose AI (GPAI) like ChatGPT, and 24 to 36 months for high-risk AI systems, so compliance is just around the corner.
The UK doesn’t have those same regulations in place. A white paper was published in 2023 and, in contrast to the EU AI Act, it takes a context-specific, risk-based regulatory approach. The level of assurance is driven by use cases and the specific risk context of the sector. Users, buyers, and developers are expected to assess what level of assurance they require for an AI product. The white paper and associated guidance highlight a range of assurance approaches, including impact assessments, third-party conformity assessments, compliance against standards, and more.
The level and nature of AI assurance sit on a spectrum. Both the producer and the buyer of an AI product must assess what is proportionate and reasonable. There is an opportunity to help and support the sector in making informed choices, so we don’t create barriers to innovation while ensuring essential aspects are addressed.
Sarah: And what does assurance look like for AI in the rail sector?
Vaibhav: AI is an algorithm that infers and makes predictions based on the data it is trained on. It might infer the next word in a sentence, like ChatGPT, or look at an image and predict what category it belongs to, like a search engine. It makes these predictions based on statistics: statistical models generalise and infer which response best fits the data. Because they generalise and then make a prediction, anomalies can be challenging. They are noise, as far as the models are concerned. Part of the challenge of an AI application is dealing with that noise, adapting to it, and remaining reasonably accurate.
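The point about anomalies can be made concrete with a minimal sketch (using hypothetical, illustrative data, not any real rail system): a simple nearest-centroid classifier, like any statistical model, always returns a prediction, even for an input far outside anything it was trained on.

```python
# Minimal sketch with hypothetical data: a nearest-centroid classifier
# always emits a label, even for an anomalous out-of-distribution input.
from math import dist

# Illustrative training points for two classes of sensor reading
training = {
    "normal": [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)],
    "fault":  [(5.0, 5.2), (5.1, 4.9), (4.9, 5.0)],
}

# "Train" by computing one centroid (mean point) per class
centroids = {
    label: tuple(sum(coord) / len(pts) for coord in zip(*pts))
    for label, pts in training.items()
}

def predict(point):
    """Return the nearest class label and the distance to its centroid."""
    label = min(centroids, key=lambda lbl: dist(point, centroids[lbl]))
    return label, dist(point, centroids[label])

print(predict((1.0, 1.0)))    # in-distribution: close to the 'normal' centroid
print(predict((40.0, -3.0)))  # anomaly: still classified, but far from both classes
```

In practice, a distance (or confidence) threshold could flag such out-of-distribution inputs for human review rather than acting on the label, which is exactly the kind of design question assurance must address.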
Once the algorithm has made a prediction, you, as a human being, must decide the extent to which you will listen to it, and in what context it can be trusted. In the railway, functions and products are highly regulated already. So the question becomes: what is to be done about this element of inference, which is part of decision making and is embedded in a function or a railway product? How do we find a baseline for a product that has the potential to evolve? How do we keep track of what has changed, and when?
RSSB created the capability of Futures Lab to anticipate and think about changes that have the potential to impact, transform, or disrupt the railway of the future, and what thinking can be done now to address emerging risks early. For example, what will an accident investigation process look like if an AI system leads to an incident? At this early stage of the journey, ‘explainability’ is a challenge. With most assurance mechanisms, you must be able to explain how and why decisions were made after the fact. If you are introducing a technology where you can’t fully explain how it went wrong, that has serious implications. Industry must be able to assure that the fundamental safety integrity of the railway system is not compromised.
In AI assurance for rail, we are fundamentally investigating four things: the machine learning component, the software, the data, and the human element. What are you going to do with the outputs of this machine learning, how does that relate to the function you’re trying to perform, and how important is that function? AI systems should be developed with a human perspective at the heart of the technology. A key priority is how human factors will be addressed in the design and operation of AI-powered systems, so that these things are done with best practice in mind.
Sarah: Can you tell us what RSSB is doing to support industry in the AI space?
Vaibhav: The first step is to recognise our role and our objective. Our purpose is to encourage the use of AI and automation in the railway but safely, ethically, and responsibly. We want to create the groundwork such that there’s confidence in people selling AI products to the railway and people deploying and embedding them. Our specific role is as a safety, standards, and research body. So, one of the ways we can help is to provide a framework to enable people to think about the possibilities of AI deployment and understand when assurance is required. The explosion of AI is causing an asymmetry of knowledge between buyers and the sellers developing these algorithms. We need to equip rail organisations to ask the right questions.
RSSB’s Futures Lab and Research directorates are pooling their expertise to support the sector in Anticipating, Navigating, Enabling and Working (A NEW) with new and emerging developments and challenges such as AI. As we love an acronym in rail, the objective is to enable A NEW railway with the confidence to deploy AI-powered solutions. I am working closely with Luisa Moisio our Director of Research to steer RSSB’s progress in this area.
In terms of Anticipation, we are keeping an eye on the impact of new AI developments. That includes regulations, verification and validation approaches, and raising awareness of developments such as the EU AI Act and the UK position on AI. To help Navigation, we are exploring ways to help the sector navigate and identify AI standards which are relevant to rail. To Enable, we are developing a framework so that rail companies adopting AI can identify proportionate assurance approaches. To Work with AI, our Human Factors experts are examining the human factors implications around the design, operation, and use of AI systems. That’s because, at the heart of it, the railway is about people. AI will change the way people work and perform their tasks, so we need high-level good practice principles for how AI should be designed and developed in the real rail context.

Sarah: Do you see AI as inevitable for all companies in the rail industry?
Vaibhav: The genie is out of the bottle. The technology exists and is developing rapidly, and its limitations are only those of the imagination. It’s possible that the hype around AI doesn’t come to fruition, but the technology is going to be with us one way or another. The question is really one of scaling and scope. How deep does it go into the railway system, how quickly does that happen, and how much can you scale up and apply AI across the whole system and at what speed? You need the mechanisms in place to be able to do those things, but at the same time, you must have a very clear human-centric strategy around ensuring how people are prepared for such systems to become part of life and work. If those things are done correctly, there’s enormous potential to enhance what we do and how the railway functions.
Sarah: How could a rail leader identify the areas in their company where AI can benefit them?
Vaibhav: There are perhaps three critical things. First, creating an environment for people to experiment, and experiment safely. I don’t think use cases will necessarily be driven from the top down. They will emerge as people embrace the technology, find ways in which they can work with AI, and come up with new solutions. Give your people an environment to experiment with data and everything else in the local context and solve small, specific problems around them. We should not underestimate their power.
Second is the human capability factor. We really need to improve awareness and skills within our companies. If you allow people to experiment but they have limited capabilities, you may not progress far. Providing a framework that allows people to come up with ideas, test them, and get funding and support will be incredibly powerful.

Third, there will be areas where, as a leader, you will think, ‘AI can be used here’ and you will champion progress. In those cases, the ability to work with organisations like ours, the DfT, and the regulators will be very helpful. This combination of leadership and opportunity creates a culture of innovation.
Imagine lots of companies are doing similar experimentation, and one of them finds a perfect way of doing something. How do we transfer that knowledge from within companies and within individuals back to the sector? This is what will get us closer to transforming the way the railway fundamentally functions rather than just at the edges. At RSSB, we can play a very important role here, as an independent body, in ensuring those use cases get proper airtime and effective collaboration occurs at an industry level.
Sarah: What are RSSB’s next steps on the AI journey, and how can people take part?
Vaibhav: We’re asking: ‘Do we really understand the EU AI Act, regulatory developments in the UK and new approaches to provide assurance? How can we connect these back to challenges specific to rail?’ We are kickstarting work on this journey to an AI-powered railway, and we’re thinking about what engagement with the sector looks like. We will of course work closely with the DfT, ORR, experts and other actors and we want to say to other rail leaders: ‘Come and join us. Please tell us about your use cases and ambitions. We are interested in your AI applications and what concerns you have so we can incorporate that in the framework and so others can also learn from your experience. We can go on the journey together.’
Our Futures Lab is exploring emerging needs and testing new ideas. Visit our website for more information.