"Keeping our national critical infrastructure safe and secure requires novel techniques" – Interview with Paul Butcher, AdaCore
Posted on October 16, 2025
What is the most disruptive trend shaping the future of high-integrity software today?
The most disruptive trend I see today is the continued drive for secure-by-design principles, specifically the increasing flow-down of high-level requirements that insist on memory-safe software and hardware architectures across all critical domains. The emphasis is on building truly robust systems that are verified to be safe and secure, where exploit potential is eliminated at the architectural level.
While multiple solutions exist in the form of development tooling and processes, it's essential to recognise that there is no silver bullet. This presents architectural designers with a quandary, as they must assess the available options and ensure they understand the benefits, gaps, and overlaps across a vast array of development tools and deployment solutions.
Techniques range from formal proof via tools such as SPARK to qualified toolchains, memory-safe languages like Ada and Rust, secure hardware solutions like the CHERI microprocessor architecture, as well as other forms of static and dynamic analysis, including fuzzing. This systemic shift validates the core tenet of high-integrity engineering: tackling system robustness and security from the ground up, rather than merely meeting compliance standards.
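To make the formal-proof end of that spectrum concrete, here is a minimal SPARK sketch (the package and subprogram names are illustrative, not from any real codebase): the precondition rules out integer overflow, and GNATprove can discharge both it and the postcondition statically, before the code ever runs.

```ada
--  Illustrative SPARK unit: names are hypothetical. The precondition
--  guarantees A + B cannot overflow, so GNATprove can prove the
--  absence of that runtime error statically.
package Safe_Math with SPARK_Mode is

   function Checked_Add (A, B : Integer) return Integer is (A + B) with
     Pre  => (if B >= 0 then A <= Integer'Last - B
              else A >= Integer'First - B),
     Post => Checked_Add'Result = A + B;

end Safe_Math;
```

Running `gnatprove -P <project>` over a unit like this reports each check as proved or pinpoints the one that is not; that is the "eliminated at the architectural level" guarantee in miniature.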
How is society's growing dependence on software changing the role of high-integrity systems?
Society's growing dependence on complex, interconnected systems is changing the role of high-integrity systems by fundamentally merging the concepts of safety, security, and robustness into a single, non-negotiable requirement for dependability.
While high integrity was once confined to traditional domains such as defence, aerospace, rail, and nuclear, the catastrophic consequences of software failure now extend into daily life, from critical infrastructure to connected vehicles. This dramatically expands the relevance of our sector, compelling us to bring rigorous assurance methods - such as the combination of language-level checks, comprehensive static analysis, and dynamic analysis techniques - to a much broader, more vulnerable class of "systems of systems" operating in openly malicious environments.
This expansion accelerates fundamental technical challenges for high-integrity design, particularly the problem of processor qualification, as the industry transitions from single-threaded, safety-critical components that are comparatively straightforward to qualify to complex, multi-core architectures that demand formal non-interference arguments.
What's one challenge in high-integrity software that keeps you up at night, and how are you tackling it?
The primary challenge that concerns me is the speed at which our industry needs to evolve - both to keep pace with adversaries, and to ensure our national critical infrastructure remains safe and secure. Responding to this challenge requires novel software development techniques that retain high assurance while reducing the time to certification.
AdaCore is focused on solving this problem through advances to our development tooling, including:
- Enhanced user feedback in SPARK, our formal proof tooling
- A commitment to long-term support for our compiler toolchains
- Expanded programming language support in our static analysis tooling through the CodeSonar toolkit
- Continued evolution of our dynamic analysis solutions, from code coverage analysis that keeps pace with industry coverage specifications to our fuzz testing tools
Automation is key. For example, by using highly automated tools like GNATfuzz that leverage compiler instrumentation and symbolic execution, we can generate and execute millions of test cases to find exploitable corner-case bugs that human-driven testing is almost guaranteed to miss.
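As an illustration of the kind of corner case involved (this sketch is hypothetical and not tied to the GNATfuzz harness format), consider a parser that trusts a length prefix in its input; Ada's built-in index checks convert the resulting out-of-bounds read into a clean, detectable failure the moment a fuzzer supplies a short buffer:

```ada
--  Hypothetical unit under test, not a GNATfuzz artefact. The length
--  prefix Input (Input'First) is trusted, so an input that claims more
--  payload than it carries reads past the end of the buffer. Ada's
--  run-time index check turns that silent out-of-bounds read into a
--  Constraint_Error the fuzzer can trap and report.
package Wire is
   type Byte   is range 0 .. 255;
   type Buffer is array (Positive range <>) of Byte;
   function Checksum (Input : Buffer) return Natural;
end Wire;

package body Wire is
   function Checksum (Input : Buffer) return Natural is
      Count : constant Byte := Input (Input'First);
      Sum   : Natural       := 0;
   begin
      for I in 1 .. Natural (Count) loop
         --  The index check fires here when the payload is shorter than Count.
         Sum := Sum + Natural (Input (Input'First + I));
      end loop;
      return Sum;
   end Checksum;
end Wire;
```

A human test writer rarely thinks to hand-craft the empty buffer or the inflated length prefix; a fuzzer mutating millions of inputs finds both quickly.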
What role will AI and machine learning play in the future of high-integrity systems?
I foresee AI and machine learning playing a growing but carefully managed role in the future of high-integrity systems, primarily in the short term, as a productivity multiplier and an assistant for assurance activities. We are already researching ways to implement AI in fuzz testing, for example, to optimize input generation and campaign execution.
For high-integrity systems themselves, however, the direct use of uncertified AI/ML for critical control will likely be constrained by rigorous safety monitors or other guardrails. The real value now lies in leveraging Generative AI to assist in complex, time-consuming tasks, such as generating code or contracts, which are then paired with high-assurance tools like SPARK to formally verify the generated content for correctness.
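A hedged sketch of that pairing (the unit below is hypothetical) shows the division of labour: an assistant may draft the contract or the body, but GNATprove remains the arbiter, either discharging the proof or pinpointing where the generated content and the specification disagree.

```ada
--  Hypothetical example of AI-assisted development: the Post contract
--  might be drafted by a code assistant, but it is only trusted once
--  GNATprove has proved the body against it. A wrong body (or a wrong
--  contract) simply fails the proof.
package Limits with SPARK_Mode is

   function Saturate (X, Lo, Hi : Integer) return Integer with
     Pre  => Lo <= Hi,
     Post => Saturate'Result in Lo .. Hi
               and then (if X in Lo .. Hi then Saturate'Result = X);

end Limits;

package body Limits with SPARK_Mode is

   function Saturate (X, Lo, Hi : Integer) return Integer is
     (if X < Lo then Lo elsif X > Hi then Hi else X);

end Limits;
```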
What's one piece of advice you'd give to the next generation of engineers entering this field?
My advice would be to embrace a holistic, multi-layered approach to assurance and be prepared to learn verification techniques across the entire spectrum. The most effective engineers in this field understand that no single tool solves every assurance problem; instead, assurance comes from a cycle of complementary methods. For the highest levels of assurance, you need the formal proof offered by static analysis, the precision of a high-integrity language like Ada or Rust, and the exhaustive bug-finding capability of dynamic tools such as fuzz testing. Focus on building sound arguments about the security and safety of your code; that expertise will always be in demand, regardless of which programming language, hardware architecture, or software development tool is currently trending.
What's one innovation or tool you believe will redefine how we build trustworthy systems?
One innovation that will redefine trustworthiness is the integration of advanced dynamic analysis tools, specifically compiler-assisted fuzzing, directly into the high-assurance workflow. Traditionally, security testing was a late-stage, time-consuming manual activity; however, it doesn't have to be this way.
The industry can overcome the barriers to adoption by utilizing tooling that automates the generation of test harnesses and leverages robust error detection mechanisms provided by language runtime checks, compiler sanitizers, and state-of-the-art hardware security capabilities, such as those offered by architectures like CHERI.
Bringing vulnerability testing to the developer's desktop is perfectly feasible, ensuring that security verification is considered earlier in the lifecycle. This automation provides a practical and efficient mechanism for executing millions of test cases to refute the presence of exploitable vulnerabilities, effectively making exhaustive negative testing an accessible and standard part of the build process for every high-integrity system.
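The shape of such an auto-generated harness can be sketched in a few lines (hypothetical names; this is not the GNATfuzz API): decode the raw bytes into typed arguments, call the unit under test, and let the language's runtime checks act as the bug detector.

```ada
--  Hypothetical harness shape, not the GNATfuzz API. The fuzzer supplies
--  Data; any Constraint_Error raised inside Checksum (index, range, or
--  overflow checks) propagates out and is reported as a finding.
with Wire;  --  the unit under test from the earlier sketch

procedure Fuzz_Entry (Data : Wire.Buffer) is
   Sum : constant Natural := Wire.Checksum (Data);
   pragma Unreferenced (Sum);
begin
   null;
end Fuzz_Entry;
```

Because the detection mechanism is the compiler's own checks (or a sanitizer, or a CHERI capability fault) rather than hand-written assertions, such a harness costs the developer almost nothing to maintain.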
What excites you most about the future of high-integrity software?
What excites me most is the convergence of high-assurance methodologies, leading to a new era where secure and safe systems can be built faster and more reliably than ever before. Seeing core high-integrity principles, such as memory safety, determinism, and strong typing, being adopted by new languages and mandated by global regulatory bodies confirms the value of the high-integrity approach.
This momentum is breaking down the barrier between safety and security, and the ability to integrate cutting-edge tools - such as the synthesis of static verification with dynamic analysis, including exhaustive bug discovery via techniques like fuzzing - means we can deliver dependable software at scale and speed across industries like automotive, space, and defence.
