Why Every QS and Commercial Manager Needs to Know About the New RICS AI Rules

October 17, 2025
3 minute read.
William Doyle
CEO at Gather

The RICS has just introduced new rules that will reshape how we use AI in construction. By March 2026, every chartered surveyor must comply with the world's first mandatory AI standard for the surveying profession.

This isn't just another guidance note you can ignore.

These rules will fundamentally change how you manage data, assess risk, and deliver projects. Here's what you need to know and why it matters to your bottom line.

Four Game-Changing Requirements You Can't Avoid

The RICS Responsible Use of AI in Surveying Practice standard introduces mandatory rules that go far beyond what I would consider typical professional guidance.

Every chartered surveyor must now meet four critical requirements:

Baseline AI Knowledge

Before you can use any AI tool in practice, you must demonstrate understanding of AI fundamentals, including how machine learning works, bias risks in training data, data privacy concerns, and the risk of erroneous outputs or "hallucinations" that could mislead professional judgment.

This isn't optional upskilling – it's a professional requirement. RICS explicitly states that familiarity with AI is currently "uneven among surveyors," so they've set a baseline knowledge threshold that everyone must meet.

Client Transparency

You must inform clients clearly, in writing and in advance, whenever an AI system with material impact will be used in delivering a service. This disclosure should specify when and for what purpose AI will be involved.

More significantly, your standard engagement documents must include details on what parts of the process will involve AI, whether your professional indemnity insurance covers AI-related issues, how clients can contest AI use, what redress mechanisms are available if clients feel harmed by AI, and whether clients can opt out of AI usage entirely.

This level of transparency treats AI involvement almost like a conflict of interest that must be disclosed. Clients have the right to know and even to say "no, I prefer a human-only approach" if you can offer one.

Document Everything

You must maintain comprehensive records, including:

  • AI risk registers with quarterly reviews documenting bias risks, erroneous outputs, and data concerns
  • Written assessments before deploying any AI that will have a "material impact" on surveying services
  • Documented decisions on the reliability of AI outputs, with named surveyors taking responsibility
  • Due diligence records from AI vendors covering environmental impact, data compliance, and bias evaluation
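
If your firm wants to keep these records in a structured, auditable form rather than in scattered spreadsheets, the rough Python sketch below shows one way a risk register entry could be modelled. The field names and the 92-day review check are my own assumptions for illustration; RICS does not prescribe any particular format.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class AIRiskRegisterEntry:
        """One illustrative risk register entry; field names are assumptions, not RICS wording."""
        tool_name: str                 # e.g. a cost estimation tool that uses machine learning
        intended_use: str              # what the tool does on live projects
        responsible_surveyor: str      # the named surveyor accountable for its outputs
        last_reviewed: date            # date of the most recent quarterly review
        bias_risks: List[str] = field(default_factory=list)     # known or suspected bias in training data
        error_modes: List[str] = field(default_factory=list)    # e.g. hallucinated quantities or rates
        data_concerns: List[str] = field(default_factory=list)  # privacy, confidentiality, retention
        vendor_due_diligence_ref: str = ""                      # pointer to the supplier questionnaire

        def review_overdue(self, today: date) -> bool:
            # The standard calls for quarterly reviews; roughly 92 days is used here as a working proxy.
            return (today - self.last_reviewed).days > 92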

Professional Liability

Perhaps most importantly, a qualified, named surveyor must take personal responsibility for every AI-generated output. RICS makes it clear that human oversight cannot be replaced by automation, and surveyors remain professionally accountable for AI outputs as if they were produced manually.

From Excel Spreadsheets to AI-Powered Commercial Intelligence

For QSs and commercial managers, this standard arrives at a pivotal moment. The industry is drowning in administrative overload whilst missing crucial commercial opportunities.

Research shows that manual diary reviews capture only 60% of legitimate change events. That missing 40% represents massive revenue leakage directly impacting project profitability. This isn't incompetence – it's inevitable when junior staff review complex commercial scenarios without senior-level contract knowledge.

Consider the typical scenario: apprentices or junior QSs handle diary reviews, seeing "concrete delayed 2 hours – weather" and filing it away. They miss that this entry is part of a cumulative disruption pattern triggering compensation clauses. When senior staff finally review the records months later during final account preparation, they discover £50,000 worth of legitimate claims that went unrecognised. By then, notice periods have expired.

The RICS standard now provides a framework for using AI to address these challenges whilst ensuring professional accountability. However, it requires significant changes to how we work.

The Six-Month Reality Check

Six months sounds like plenty of time until you break down what actually needs to happen. The reality is sobering when you look at the data.

The Current State of Play

According to the 2025 RICS artificial intelligence in construction report, about 45% of construction and surveying firms have no AI capability whatsoever, with another 34% still stuck in the pilot or exploration phase. That means nearly 80% of the industry is scrambling to implement AI and comply with RICS rules simultaneously.

What Six Months Actually Means

  • Q4 2025: Audit current tools, assess team knowledge, draft policies
  • Q1 2026: Implement training, update contracts, build risk registers
  • March 2026: Full compliance required

The Three Categories Emerging

Early Movers (10-15% of firms): The minority already using AI can adapt existing governance frameworks. Industry analysis points to a "competitive advantage for those who move early" in AI adoption, and these firms can market their compliance as a differentiator to clients.

The Scrambling Majority (70% of firms): Racing to pursue AI adoption and RICS compliance at the same time. These firms risk making expensive mistakes or creating overly bureaucratic processes that slow down operations rather than enhance them.

The Ostriches (15-20% of firms): Hoping this will go away or that enforcement will be light. They're gambling with their professional standing and may find themselves locked out of major projects where clients demand RICS-compliant AI practices.

The research shows conditions are "ripe for widespread adoption of AI in construction in the next 12-24 months" – but that adoption must now happen within RICS's compliance framework. If you're reading this thinking "we'll sort it out next quarter," the data suggests you're likely in the scrambling majority.

Three Immediate Actions Every QS Must Take

Your March 2026 compliance depends on starting now. Here's your essential checklist:

1. Audit Your Current AI Usage

List every tool that might count as AI, from basic automation to advanced analytics. The standard applies to AI with "material impact" on surveying services, which includes:

  • Automated valuation models (AVMs)
  • AI-powered cost estimation tools
  • Document analysis software
  • Predictive analytics for project performance
  • Any software using machine learning for decision support

Don't assume simple tools are exempt. If an algorithm influences your professional judgement or client deliverables beyond a trivial level, it likely qualifies.
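
As a rough illustration of how that audit might be kept honest, the Python sketch below (with a hypothetical tool name and criteria of my own choosing) flags any tool that uses machine learning and touches professional judgement or client deliverables as needing a written assessment. Treat it as a triage aid under those assumptions, not as a definition of the RICS threshold.

    from dataclasses import dataclass

    @dataclass
    class ToolRecord:
        """Minimal inventory record for one piece of software (illustrative only)."""
        name: str
        uses_machine_learning: bool   # covers AVMs, predictive analytics, document analysis
        influences_judgement: bool    # feeds professional judgement on the project
        client_facing_output: bool    # results appear in reports, valuations or accounts

    def needs_written_assessment(tool: ToolRecord) -> bool:
        # Rough triage for the "material impact" test; err on the side of assessing.
        return tool.uses_machine_learning and (tool.influences_judgement or tool.client_facing_output)

    # Hypothetical example: even a "simple" estimating add-in gets flagged
    # if its output reaches the client.
    estimator = ToolRecord("CostPredict add-in", uses_machine_learning=True,
                           influences_judgement=True, client_facing_output=True)
    print(needs_written_assessment(estimator))  # True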

2. Assess Your Team's AI Knowledge

RICS requires baseline knowledge before using AI tools. Conduct honest skills assessments across your team:

  • Do they understand how machine learning models work?
  • Can they identify potential bias in AI outputs?
  • Do they know data privacy requirements when using AI services?
  • Can they explain AI limitations to clients in plain English?

For most teams, significant training gaps will emerge. Plan for substantial CPD investment over the next six months.

3. Review Your Client Contracts

Update engagement letters immediately to include AI disclosure requirements:

  • Clear statements about when and how AI will be used
  • Professional indemnity insurance coverage for AI-related issues
  • Client rights to contest or opt out of AI usage
  • Redress mechanisms if AI causes problems
  • Transparency about your AI governance processes

Consider this an opportunity to differentiate your firm as forward-thinking yet trustworthy.

The Opportunity Hidden in the Compliance Burden

Rather than viewing this as a bureaucratic burden, smart firms will leverage compliance as a competitive advantage.

Client Trust and Differentiation

Clients increasingly worry about AI's role in professional services. By proactively demonstrating responsible AI governance, you signal innovation combined with professional rigour. This could become a key differentiator in tender processes.

Operational Excellence

Proper AI governance forces systematic thinking about data quality, process documentation, and risk management. Many firms will discover that compliance requirements actually improve their overall operational standards.

Future-Proofing

The standard ensures you're prepared for broader regulatory changes. As the EU AI Act and other regulations emerge, RICS-compliant firms will find themselves ahead of the curve rather than scrambling to catch up.

Revenue Protection

Most importantly, the standard's emphasis on documentation and professional oversight helps ensure you capture the full commercial value of your work. No more missed change events due to poor record-keeping or inadequate analysis.

Common Compliance Pitfalls to Avoid

Based on early implementation experiences, several pitfalls are emerging:

Underestimating the "Material Impact" Threshold: Firms assume basic tools won't qualify, then discover their cost estimation software uses machine learning algorithms that significantly influence project decisions.

Inadequate Vendor Due Diligence: The standard requires extensive information from AI suppliers about data bias, environmental impact, and legal compliance. Many vendors aren't prepared for these inquiries.

Box-Ticking Mentality: Creating risk registers and policies without genuine engagement leads to compliance that looks good on paper but fails in practice. RICS will likely focus on substance over form during any disciplinary proceedings.

Client Communication Failures: Simply adding AI clauses to contracts isn't enough. Teams must be able to explain AI usage in plain English and handle client concerns professionally.

The Choice Every QS Faces in 2025

You have six months to get this right. The firms that embrace responsible AI governance now will be winning the best projects in 2027, whilst those who ignore these requirements or comply with them only grudgingly will find themselves at a disadvantage.

This standard represents more than regulatory compliance – it's RICS positioning the surveying profession as leaders in responsible technology adoption. By getting ahead of this curve, you're not just avoiding professional discipline; you're future-proofing your career and your firm.

The question isn't whether you can afford to invest time and resources in RICS AI compliance. The question is whether you can afford not to.

What's your firm's biggest compliance challenge? Are you ready for March 2026?

Key takeaways
  • Mandatory compliance by March 2026 requires four critical changes: baseline AI knowledge for all staff, written client disclosure before using AI tools, comprehensive documentation including quarterly risk registers, and named surveyor accountability for every AI output.
  • Nearly 80% of construction firms are unprepared, with 45% having no AI capability and 34% still exploring pilots. The six-month timeline means firms must start auditing tools, assessing team knowledge, and updating contracts in Q4 2025 to avoid scrambling.
  • Compliance creates competitive advantage rather than bureaucratic burden. Smart firms will leverage AI governance to differentiate in tenders, improve operational standards through systematic documentation, and future-proof against broader regulatory changes like the EU AI Act, whilst early movers position themselves to win the best projects in 2027.