Just like anything else, AI (artificial intelligence) brings opportunities and risks, promise and peril. And, just like anything else, we must be prepared for it. That raises the next logical question: should we consider launching standards to guide the responsible use of AI in various industries?
For today’s blog, let’s consider the example of surveying. First, let’s address the opportunities and risks. AI can help here in several ways, including automated image analysis, 3D terrain modeling, land use classification, planning simulation, progress monitoring, quality assurance, and predictive maintenance, just to name a few. Ultimately, it will help surveyors map faster.
Of course, challenges exist here too. Surveyors still need to verify survey precision, and AI models require quality data for training. There are also always questions about cost, data privacy, and regulatory compliance.
Now, let’s answer the question at hand: Should we consider the launch of standards to guide the responsible use of AI in various industries? One organization says yes.
The Launch of AI Standards
The RICS (Royal Institution of Chartered Surveyors) has published a global professional standard for the responsible use of artificial intelligence in surveying.
Set to take effect in March 2026, the new standard sets out mandatory requirements and best practice expectations for RICS members and regulated firms worldwide. It addresses the growing integration of AI across valuation, construction, infrastructure, and land services, and it aims to ensure these tools are used ethically, transparently, and with professional oversight.
Some of the key provisions of the new standard include governance and risk management, professional judgment and oversight, transparency and client communication, and ethical development of artificial intelligence.
The standard’s requirements aim to:
- provide a basis for upskilling the profession
- represent a baseline of practice management at regulated firms, aimed at minimizing the risk of harm caused by AI systems in the delivery of services
- enable informed and clear decisions to be made on AI procurement and reliance on AI outputs
- represent good communication and information sharing with clients and other relevant stakeholders
- provide a framework for the responsible development of AI systems by members and regulated firms
Of course, surveying is only one example. Other industries are also working on creating standards for the responsible use of AI, including healthcare, financial services, automotive, energy, manufacturing, and more.

The key point here is AI isn’t inherently good or bad. It is simply a tool. The opportunities are huge, but only if the risks are managed responsibly. This can be done in a few ways, such as through transparent governance and ongoing dialogue with both internal and external stakeholders. The bottom line here is policies and ethics frameworks are key to ensuring AI is developed and used for the benefit of all.
Want to tweet about this article? Use hashtags #surveying #construction #IoT #sustainability #AI #5G #cloud #edge #futureofwork #infrastructure
