
From Sci-Fi to Reality: Professor Ajay Chakravarthy on the future of safe and trusted AI

  • Feyzan Fullerton
  • Aug 29
  • 8 min read
Photograph of Prof Ajay Chakravarthy wearing a blue suit on a plain background.

Artificial intelligence might be the buzzword of the moment, but for Ajay Chakravarthy, it’s been central to his work for the last two decades. Now Head of AI at Thales UK, Ajay is applying AI in environments where there’s no room for error.


At Thales, he leads two major areas. The first is the CortAIx Factory, the company’s internal initiative to take existing or new AI models and “harden” them for use in mission-critical systems. The second is overseeing AI adoption across Thales UK as part of the Digital and Data Competence Centre. “The goal with CortAIx is to assure, govern and prepare AI to a level where it can be deployed into systems where the consequences of failure are high. Thales works in sectors like defence, space, transport, and cybersecurity, so these systems must be robust, repeatable and trusted.”


Ajay’s journey into AI began with a fascination with science fiction and deepened through a master’s and PhD in semantics, the study of how meaning is attached to data. He later worked at the University of Southampton’s IT Innovation Centre, at the government’s Defence Science and Technology Laboratory (Dstl) and in Counter Terrorism Policing, before joining Building Digital UK and then Thales.


He’s also a Visiting Professor at the University of Sheffield, which helps him stay closely connected to the academic landscape. “It gives me a way into the research community. I get to see what’s coming, work with everyone from PhD students to professors, and bring that thinking into industrial work. It goes both ways too: I can feed real-world challenges back into academia so the research stays relevant.”

 

Ajay sees this connection as a practical way to accelerate innovation. “There are problems which would cost a huge amount to solve in industry, but you can give them to a group of four students with a bit of structure and get something 80% there in a few weeks. That kind of capability is massively underused.”


Across all these roles, Ajay has remained hands-on with the evolution of AI itself. “I've been doing AI in one form or another for 20 years. From traditional machine learning to adversarial AI, and now with large language models and agentic AI, it’s been a constant evolution. But the aim is always the same: how do we get value from AI, safely and at pace?”


AI is making a difference

Ajay highlights four main areas where he sees AI delivering the most real-world impact, each based on his experience across sectors.


Predictive AI for Structured Data

In sectors that collect large amounts of structured data, like oil and gas, energy, or transport, AI is enabling more accurate predictions and maintenance planning. “We’re getting more and more data from sensors, which is highly structured. When you combine that with fault reports, there’s a lot you can do in terms of predictive maintenance and usage optimisation. That can lead to huge cost savings, especially in energy efficiency and performance.” Ajay notes that the underlying methods aren’t always new, but the ability to apply them at scale has improved dramatically.
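The sensor-driven prediction Ajay describes can be illustrated with a minimal, stdlib-only sketch. The thresholding rule, window size and vibration data below are invented for illustration; real predictive-maintenance systems fuse many sensor streams with fault reports and use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=2.0):
    """Flag sensor readings that deviate sharply from the recent rolling
    baseline -- a toy stand-in for the predictive-maintenance signals
    described above."""
    flags = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sigma = mean(base), stdev(base)
        # A reading far above the rolling baseline suggests a developing fault.
        flags.append(readings[i] > mu + k * sigma)
    return flags

vibration = [0.9, 1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 3.5, 1.0]
print(flag_anomalies(vibration))  # the 3.5 spike is flagged: [False, False, True, False]
```

The same idea scales up: once the data platform delivers normalised sensor streams, swapping this threshold for a learned model is an incremental step rather than a rebuild.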


Large Language Models and Knowledge Access

Ajay sees large language models (LLMs) playing a transformative role in back-office functions, where information is often locked in documents and systems that don’t talk to each other. “There are still big organisations with poor knowledge management, things are siloed. What LLMs offer is a quick way to access and understand that information, especially when you use them alongside retrieval-augmented generation (RAG allows LLMs to access and use information beyond their pre-trained parameters, leading to a more informed response). You don’t need to be a data scientist anymore: you can just ask questions in natural language and get meaningful answers. You don’t even need to bring all your documents into one place; the LLM can connect the dots and serve up what you need, without deep technical skills.”
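The RAG pattern can be sketched in a few lines. This is a deliberately simplified illustration: the keyword-overlap retriever and the documents are placeholders, and production systems use embedding-based vector search and pass the assembled prompt to a real LLM.

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Real RAG pipelines use vector embeddings, but the principle is the same."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, documents):
    # The retrieved passages are injected into the prompt, so the model
    # answers from your documents rather than only its pre-trained parameters.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The maintenance manual says filters are replaced every 90 days.",
    "Holiday requests go through the HR portal.",
    "Filters in zone B were last replaced in March.",
]
print(build_prompt("When are filters replaced?", docs))
```

Note that the two filter-related documents are surfaced even though they live in different “silos” of the list, which is the connect-the-dots behaviour described above.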


Agentic AI and Task Automation

One of the most exciting developments Ajay sees is the rise of agentic AI: AI that doesn’t just answer questions but completes tasks. “It’s still early, and the word 'agentic' is being used a lot right now, not always correctly, but it is real. Take Comet, the browser being developed by Perplexity AI. It’s agentic by default. You don’t just search anymore. You say, ‘Here’s my shopping list, go to my usual site, find me the best deals, order it, here’s my card, deliver it tomorrow.’ And the browser does all of it. That’s a complete change in how we interact with technology.” Ajay believes this approach won’t be limited to shopping or web use. “Software development is a good example. The way we write code is going to change. Most of the routine work can already be done by AI. So we’ll see software engineering itself evolve.”


Mission-Critical AI and Assurance

At Thales, a significant focus for Ajay is ensuring that AI can be used in mission-critical contexts, places where failure has serious consequences. “If AI on a shopping site goes wrong, fine, you lose some money, but if something fails in a plane or satellite, the consequences are much higher. That’s why we need AI assurance, a whole new area focused on making sure the AI is hardened and trusted.” He highlights that the challenge isn’t just technical accuracy, but human trust. “You have to get to a point where a pilot or a commander is willing to say, ‘I’ll take accountability for what the AI is doing, because I trust it.’ That’s a very high bar.”


The Importance of Strong Foundations

Ajay is clear that AI only delivers value when built on solid data infrastructure, a lesson he took from his time leading data work at Building Digital UK (BDUK). “The key enabler for AI is getting your foundations right, having a data platform where everything is normalised and in one place. At BDUK, we worked with partners like Google, Solirius, and Softserve to build the data backbone. Because we did the fundamentals well, AI can be deployed on top of that with much less friction.” He emphasises the role of semantic layers, meaning-based structures that help systems interpret data more intelligently. “Once you’ve got that semantic layer, agentic AI can work really effectively. Even if your data isn’t perfect, the semantic mapping helps make the connection and gives you transparency over what’s clean and what isn’t.”
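The semantic-layer idea can be made concrete with a small sketch. The field names and mapping here are hypothetical; a real semantic layer at an organisation like BDUK would be a managed ontology over many source systems, not a dictionary.

```python
# Hypothetical semantic layer: map messy source-system fields onto shared concepts.
SEMANTIC_MAP = {
    "cust_nm": "customer_name",
    "CustomerName": "customer_name",
    "premises_cnt": "premises_count",
}

def normalise(record):
    """Rewrite raw keys to canonical concepts. Unmapped keys are kept but
    reported, giving transparency over what's clean and what isn't."""
    clean, unmapped = {}, []
    for key, value in record.items():
        if key in SEMANTIC_MAP:
            clean[SEMANTIC_MAP[key]] = value
        else:
            clean[key] = value
            unmapped.append(key)
    return clean, unmapped

print(normalise({"cust_nm": "Acme", "region": "NW"}))
# ({'customer_name': 'Acme', 'region': 'NW'}, ['region'])
```

Because every downstream consumer (including an agent) sees `customer_name` rather than three spellings of it, imperfect source data can still be joined reliably, and the `unmapped` list shows exactly where the data isn’t clean yet.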


Ethics, Risks and AI Safety

Alongside deployment, Ajay also focuses on ethical and safety implications, especially in sectors like defence and healthcare. “There’s already a lot of research on AI safety, but much of it is theoretical. The harder part is putting it into practice. For example, we were supposed to see more autonomous vehicles on the roads by now, but it hasn’t happened, partly because AI safety hasn’t been addressed at scale.” Ajay praises the UK’s approach to ethics, particularly in military settings, but notes that not every country shares that mindset. “We have strong values around ethical deployment, but there’s a geopolitical shift happening. Technology is where power is moving, and you can see that in the fact that world leaders now invite tech CEOs to their most important meetings.”


There are also technical risks to consider, especially hallucinations, where AI models generate false or misleading outputs. “Hallucinations can be managed, for example with semantic layers and setting creativity levels to zero. But even then, you need consistency and repeatability. If I run a model today and get one result, and run it tomorrow and get another, that doesn’t build trust. There are also important considerations to be made around acceptable error thresholds. Right now we tolerate human error but what are we willing to accept when it’s AI? Clearly laying out the risks and identifying what is acceptable within safe limits will be critical to the decision making in this area.”
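The repeatability point can be made concrete with a toy decoding step. “Setting creativity levels to zero” corresponds to sampling temperature zero, i.e. always taking the highest-scoring token; the vocabulary and scores below are invented for illustration.

```python
import math
import random

def sample_token(logits, temperature):
    """Toy decoder step. Temperature 0 means greedy argmax -- the same
    input always yields the same token, which is what gives repeatability."""
    if temperature == 0:
        # Deterministic: always the highest-scoring token.
        return max(logits, key=logits.get)
    # Otherwise sample from the temperature-scaled softmax distribution.
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

logits = {"yes": 2.1, "no": 1.9, "maybe": 0.4}
runs = {sample_token(logits, temperature=0) for _ in range(10)}
print(runs)  # always {'yes'}: identical output on every run
```

With a non-zero temperature the same call can return different tokens on different runs, which is exactly the today-versus-tomorrow inconsistency Ajay warns against in mission-critical settings.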


Cybersecurity and Dual-Use Technology

Ajay flags cybersecurity risks as an emerging concern in AI deployments, particularly the ways models can be manipulated. “Even if your architecture is secure, the model itself can be vulnerable. You can poison the source data it’s trained on, or spoof the model. In some cases, attackers can even recreate a black box model by sending enough queries and analysing the responses. So AI-specific cybersecurity is an emerging and essential skill.”


He also warns about dual-use concerns, where the same AI tools that benefit society can be used for harm. “You can train an LLM to carry out fraud, or use voice cloning to access someone’s bank account. That’s just one example of many, so the question becomes: how do we detect misuse, and how do we stay ahead?”


Societal Impacts and Jobs

As AI adoption accelerates, Ajay is increasingly thinking about the social consequences, especially around job displacement and public understanding. “Whenever I get into a taxi and tell the driver I work in AI, the first question is: how long until my job goes? There’s a nervousness building in society. But ironically, the first jobs affected are in tech itself, like software engineering.”


He’s concerned about a divide between people who embrace AI and those who are left behind. “The pace of change is so fast that whole sections of society could miss out. We’re already seeing huge salaries for AI engineers, it’s become a new elite group. If we don’t think about inclusion and education now, that divide will get worse.”


Stay Focused on the Problem

With AI becoming ever more powerful and accessible, Ajay has one core piece of advice for organisations: “Don’t start with the solution. Start with the problem. I’m seeing a lot of people saying, ‘We need AI,’ and then going looking for something to use it on. That’s backwards. Define the problem, clean your data, and build from there.” He’s also clear that real deployment still takes time. “It’s easy to build a prototype with a prompt. But deploying that into a hardened, compliant, and trustworthy system, especially in mission-critical environments, takes real engineering. We need to manage expectations.”


Projects That Made a Difference

Two projects stand out for Ajay from his recent roles. “At BDUK, we went from an idea to live deployment of the data backbone in 18 months. We had strong partnerships and a mix of culture change, strategic thinking and good tech. It’s now supporting decision-making and AI across the organisation.”


He also points to the University Innovation Concept he led while in Counter Terrorism Policing. “We worked with 15 universities, over 200 academics and 11 nationalities, from early-stage students to heads of department, and delivered 10 projects in a year. Many of those prototypes were pulled through into operational use. It was one of the most diverse and productive environments I’ve worked in.”


Looking Ahead

Ajay’s eyes are on what he calls the “big bets” in AI: physical AI, quantum AI, and medical AI. “Physical AI, robotics powered by AI, is going to be huge. Quantum AI is coming fast, and when it hits, it’ll be a game-changer. And then there’s medical AI, like the work DeepMind did with protein folding, which could eventually help cure cancer. That’s the kind of future we’re heading towards.”


Who He’s Watching

To stay current, Ajay relies on a combination of visual and curated content. “I learn best visually, so I follow YouTubers like Veritasium and FloatheadPhysics. They’re great at breaking down complex topics. I like Bill Gates’s articles on the intersection of technology, ethics and policy. I also keep up with news through Chrome’s aggregator; it serves me content based on what I’ve been reading. That’s AI working for me.”

 
 