...

The ‘Iron Cage of Rationality’ Could Hinder the Revolutionary Potential of AI in Environmental Planning

The combination of affordable environmental sensors and AI-powered analytical tools could make environmental planning faster and more insightful. This matters especially now: proposed changes under the Fast-track Approvals Bill would require quicker assessments, raising the stakes for good decision-making.

Our research at Kuaha Matahiko, a collaborative project focused on compiling land and water data, has revealed a strong interest in AI among iwi and hapū (tribal) groups. Environmental guardian organizations that are already stretched thin see the potential for AI to integrate fragmented environmental datasets and improve analytical capacity in a cost-effective manner.

To address this need, the Kuaha Matahiko project has developed an AI system trained on environmental data from Aotearoa New Zealand. This shows that bespoke AI solutions are becoming a viable option for kaitiaki (guardian) groups, including smaller ones.

However, caution is necessary. Experience with earlier algorithm-powered systems shows they can perpetuate existing inequalities in data gathering and narrow the range of outcomes we are able to imagine.

These issues arise from two interconnected problems: a history of ad hoc data gathering and the misconception that larger data volume equates to better accuracy.

The “precision trap” refers to the risk of mistaking a high volume of data for high accuracy. A study of precision agriculture highlights this danger: when the precision of big data is overestimated, checks and balances tend to fall away.
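
To make the distinction concrete, here is a minimal, hypothetical sketch in Python (the nitrate value, sensor bias, and sample sizes are invented for illustration, not taken from the study): averaging ever more readings from a systematically biased sensor network makes the estimate more precise, but no volume of data corrects the bias.

import random

TRUE_NITRATE = 5.0   # hypothetical true concentration (mg/L)
SENSOR_BIAS = 0.8    # systematic error, e.g. from uncalibrated or unevenly sited sensors
NOISE = 0.5          # random measurement noise

def sample_mean(n_readings):
    # Average n biased, noisy sensor readings.
    readings = [TRUE_NITRATE + SENSOR_BIAS + random.gauss(0, NOISE) for _ in range(n_readings)]
    return sum(readings) / n_readings

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} readings -> estimate {sample_mean(n):.3f} (true value {TRUE_NITRATE})")

# More readings shrink the scatter between runs (higher precision),
# yet every estimate stays roughly 0.8 mg/L wrong (no better accuracy):
# data volume cannot fix biased data gathering.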

As algorithms become increasingly opaque, there is a growing risk of blindly accepting the accuracy of AI outputs. This is concerning because numbers are often regarded as objective “hard facts” with political, social, and legal implications.

To avoid falling into an “iron cage of rationality,” it is crucial to establish inclusive, intelligible, and diverse AI partnerships. This involves recognizing social histories of data gathering and actively addressing past data gaps.

Data and AI should serve human goals. Indigenous data sovereignty movements advocate for the rights of Indigenous peoples to own and govern data about their communities, and frameworks such as CARE (collective benefit, authority to control, responsibility, and ethics) prioritize flourishing human relationships.

Expanding the worldview of AI is essential. The field is currently dominated by a “WEIRD” standpoint (western, educated, industrialized, rich, democratic), and incorporating diverse perspectives is crucial. This includes developing AI systems that embody Indigenous knowledge and worldviews.

To keep future possibilities open, we need a radical vision of AI: one built on diverse worldviews rather than one that locks us into predetermined paths.