Ensuring AI is used responsibly is critical to safety and security
Since the UK government unveiled its AI Opportunities Action Plan earlier this year, more businesses than ever are turning to artificial intelligence (AI) in an effort to boost productivity. The strategy projects that full adoption of AI could be worth up to £47 billion a year to the UK over a decade. John Farrow, digital project principal for the UK and Europe at Mott MacDonald, explores how the infrastructure sector can realize this potential without compromising safety and security.

We realize that achieving this goal safely and ethically won’t be easy, but it is not unattainable: we have worked with major government clients to deploy AI and have seen the benefits first-hand. On one part of a large infrastructure project, AI has recently been used to track drawings and apply design codes, with a potential saving of £0.5M in work hours. That may seem small on a multimillion-pound project, but the value compounds as AI is used to drive efficiency in other areas of the project as well.
The launch of the government’s AI plan adds urgency to developing, acquiring, implementing and embracing AI capabilities and technology. As with any new technology, there are risks to consider, but with AI the risks can be higher than with previous digital solutions if the right knowledge is lacking. These challenges make human oversight and strong governance essential when implementing AI.
AI risks
Significant obstacles include the possibility of biased algorithms, privacy and data protection issues, and the ethical implications of algorithmic and autonomous decision-making. AI systems can widen the attack surface for cyber attacks and behave in ways that are difficult for human operators to understand or predict. Malicious actors may use AI to enhance their attacks on critical infrastructure, or target AI systems directly.
Over-reliance on AI decision-support tools can lead to operational errors and inefficiencies, and ultimately to asset failure. Without sound governance, AI’s hazards could outweigh its benefits, breeding public mistrust and prompting government intervention.
It is crucial to use synthetic test questions to manually verify how the AI is operating, and ground truthing to assess the results it provides. This was highlighted in work we submitted for a major government contract, where building the business case required checking that 8,000 documents were compliant. That would be nearly impossible by hand, but the time required can be cut dramatically by loading the documents into a retrieval-augmented generation (RAG) database and searching it with a large language model (LLM). Inspection and assessment remain essential to build confidence in the output and to learn how to improve accuracy and reliability.
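As an illustration only (not the actual pipeline used on that contract), ground truthing can be as simple as scoring the AI’s answers to synthetic test questions against answers a human has already verified. Here `ask_model` is a hypothetical stand-in for the real RAG-plus-LLM query step, and the questions and documents are invented:

```python
# Minimal ground-truthing sketch: score an AI system's answers to
# synthetic test questions against known correct answers.

def ask_model(question: str) -> str:
    # Placeholder for the real retrieval-augmented generation query.
    # Here it just returns canned answers for the toy test set below.
    canned = {
        "Does document D-001 cite design code BS EN 1990?": "yes",
        "Does document D-002 include a revision history?": "no",
    }
    return canned.get(question, "unknown")

def ground_truth_accuracy(test_set: list[tuple[str, str]]) -> float:
    """Fraction of synthetic test questions the model answers correctly."""
    correct = sum(1 for question, expected in test_set
                  if ask_model(question).strip().lower() == expected)
    return correct / len(test_set)

test_set = [
    ("Does document D-001 cite design code BS EN 1990?", "yes"),
    ("Does document D-002 include a revision history?", "no"),
]
print(ground_truth_accuracy(test_set))  # 1.0 for this toy example
```

Tracking a score like this over time shows whether changes to the retrieval database or prompts are improving accuracy and reliability.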
Why governance matters
AI governance refers to the frameworks, rules and procedures that ensure AI systems are designed and used responsibly. Ethics in AI deployment goes beyond following the rules: it entails a commitment to doing the right thing even when the law does not require it.
As an example of the issues we help address, we recently worked with a major government client to build a secure and ethical AI-based analysis. The approach produced a comprehensive data dashboard while safeguarding sensitive data. The volume and nature of the data made manual methods too time-consuming, yet the data’s characteristics raised concerns about whether AI could analyse it safely. Although the technical solution was built in just a few days, it was vital that the teams first thought through the risks of using AI and the consequences of any failure.
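One common way to safeguard sensitive data in this kind of work, sketched here as an assumed approach rather than the client solution, is to redact identifying values before any text reaches an AI service, so the model never sees the raw data:

```python
# Illustrative sketch: mask obvious sensitive identifiers (emails,
# UK-style phone numbers) before text is sent for AI analysis.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace each matched sensitive value with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 01234 567890."))
# Contact [EMAIL] or [PHONE].
```

Real deployments would use broader pattern sets and review the redacted output, but the principle is the same: the risk assessment and data-protection step comes before the AI analysis, not after.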
To address the ethical, legal and security issues, we bring together subject matter experts and technical specialists in AI to advise clients on the risks, while challenging, testing and verifying the solution to make sure the AI is functioning as intended.
Six steps for responsible AI use
Businesses that want to use AI responsibly should:
- Align AI governance with organizational values and risk tolerance. Clearly state acceptable uses of AI and the steps for resolving ethical dilemmas. Build a consistent narrative around the organization’s AI strategy that reflects its values.
- Secure leadership support and organization-wide buy-in. Leadership commitment should come as early as possible to champion best practice and drive cultural change within the company.
- Start small and scale gradually. To reduce risk, use existing governance frameworks and procedures to build an AI evaluation process, and seek outside help until AI capability matures.
- Include a range of viewpoints. Involve communities, consumers and other stakeholders in conversations about AI’s use in projects, to build confidence in its application.
- Encourage AI literacy. Ensure all staff have the skills, resources and confidence to embrace responsible AI use, while understanding their own knowledge gaps and the technology’s potential.
- Monitor regulatory change. Stay up to date with evolving AI laws and adjust policies and procedures as required.
Understand your risk
The importance of ethical decision-making and fit-for-purpose governance as AI continues to transform sectors cannot be overstated. Every AI use case raises its own governance issues and ethical dilemmas. Organizations that operate without understanding and mitigating these risks may face difficulties down the line, from reinforcing biases to creating new security flaws. Careless AI applications and practices not only expose a company and its employees to greater risk; they can also harm the communities they serve and society at large.
Adopting responsible AI is about more than staying competitive; it allows leaders to act with integrity and vision, and to navigate the AI landscape with confidence. Organizations that start the journey today will be better prepared for a future in which AI plays a crucial role in every facet of infrastructure creation and management. The first step is likely to be assessing AI proficiency across your organization and supply chain.