New Zealand’s economy has been described as “a housing market with bits tacked on”. Buying and selling property is a national sport, fuelled by the rising value of homes across the country.
But the wider public has little understanding of how those property valuations are created – despite their being a key factor in most banks’ decisions about how much they are willing to lend for a mortgage.
Automated valuation models (AVMs) – systems enabled by artificial intelligence (AI) that crunch vast datasets to produce instant property values – have done little to improve transparency in the process.
These models started gaining traction in New Zealand in the early 2010s. Early versions relied on limited data sources, such as property sales records and council information. Today’s more advanced models also incorporate high-quality geo-spatial data.
AI models have improved efficiency. But the proprietary algorithms behind those AVMs can make it difficult for homeowners and industry professionals to understand how specific values are calculated.
In our research, we are developing a framework for evaluating these automated valuations. We have looked at how the figures should be interpreted and what factors the AI models might miss.
In a property market as geographically and culturally varied as New Zealand’s, these points are not only relevant – they are critical. The rapid integration of AI into property valuation is no longer just about innovation and speed. It is about trust, transparency and a robust framework for accountability.
AI valuations are a black box
In New Zealand, property valuation has traditionally been a labour-intensive process. Valuers would usually inspect properties, make market comparisons and apply their expert judgement to arrive at a final value estimate.
But this approach is slow, expensive and prone to human error. As demand for more efficient property valuations increased, so did the use of AI.
Yet the rise of these valuation models is not without its challenges. While AI offers speed and consistency, it also comes with a critical downside: a lack of transparency.
AVMs often operate as “black boxes”, providing little insight into the data and methodologies that drive their valuations. This raises serious concerns about the fairness and reliability of these systems.
It is not clear what exactly the algorithm is doing when an AVM estimates a home’s value. This opacity has real-world consequences, perpetuating market imbalances and inequities.
Without a framework to monitor and correct these discrepancies, AI models risk distorting the property market further, especially in a country as diverse as New Zealand, where regional, cultural and historical factors significantly influence property values.
Transparency and accountability
A recent discussion forum with real estate industry insiders, law researchers and computer scientists highlighted the need for greater accountability when it comes to AVMs. Transparency alone is not enough. Trust must be built into the system.
This can be achieved by requiring AI developers and users to disclose data sources, algorithms and error margins behind their valuations.
Additionally, valuation models should incorporate a “confidence interval” – a range of prices that shows how much the estimated value might vary. This offers users a clearer understanding of the uncertainty inherent in each valuation.
But effective AI governance in property valuation cannot be achieved in isolation. It demands collaboration between regulators, AI developers and property professionals.
Bias correction
New Zealand urgently needs a comprehensive evaluation framework for AVMs, one that prioritises transparency, accountability and bias correction.
This is where our research comes in. We repeatedly resample small portions of the data to account for situations where property value data do not follow a normal distribution.
This process generates a confidence interval showing a range of possible values around each property estimate. Users are then able to understand the variability and reliability of the AI-generated valuations, even when the data are irregular or skewed.
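As a simplified sketch of this idea (not the full model – the residuals and dollar figures below are invented for illustration), a bootstrap of past valuation errors in Python might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented example: percentage errors between past AVM estimates and actual
# sale prices for comparable properties (deliberately skewed, non-normal).
past_errors = rng.gumbel(loc=0.0, scale=0.08, size=500)

def bootstrap_interval(point_estimate, errors, n_resamples=10_000, level=0.95):
    """Repeatedly resample past valuation errors and apply them to a single
    AVM point estimate, giving a range of plausible values (illustrative)."""
    sampled = rng.choice(errors, size=n_resamples, replace=True)
    possible_values = point_estimate * (1 + sampled)
    lower, upper = np.quantile(
        possible_values, [(1 - level) / 2, 1 - (1 - level) / 2]
    )
    return lower, upper

low, high = bootstrap_interval(850_000, past_errors)
print(f"AVM estimate: $850,000; 95% interval: ${low:,.0f} to ${high:,.0f}")
```

Because the interval is built from the observed errors rather than a normal-distribution formula, it can be asymmetric – wider on the side where the model has historically missed by more.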
Our framework goes beyond transparency. It incorporates a bias correction mechanism that detects and adjusts for consistently overvalued or undervalued estimates within AVM outputs. Examples include regional disparities and the undervaluation of particular property types.
By addressing these biases, we ensure valuations that are not only accountable and auditable but also fair. The goal is to avoid the long-term market distortions that unchecked AI models could create.
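One simple form such a correction could take (a sketch only – the regions, column names and figures below are invented) is to rescale estimates by the median ratio of AVM value to sale price within each region:

```python
import pandas as pd

# Invented audit sample: AVM estimates alongside actual sale prices.
sales = pd.DataFrame({
    "region":     ["Auckland", "Auckland", "Gisborne", "Gisborne", "Otago"],
    "avm_value":  [1_150_000, 980_000, 410_000, 455_000, 620_000],
    "sale_price": [1_100_000, 1_020_000, 460_000, 500_000, 630_000],
})

# Median ratio of AVM value to sale price per region: above 1 means the model
# tends to overvalue that region, below 1 means it tends to undervalue it.
ratios = (sales["avm_value"] / sales["sale_price"]).groupby(sales["region"]).median()

def corrected_value(avm_value, region):
    """Scale a new AVM estimate by the inverse of its region's median ratio."""
    return avm_value / ratios.get(region, 1.0)

# A Gisborne estimate is nudged upward if the model has been undervaluing there.
print(f"${corrected_value(430_000, 'Gisborne'):,.0f}")
```

The same approach can be grouped by property type instead of region; the point is that systematic errors are measured against real transactions and fed back into the model’s outputs.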
The rise of AI auditing
But transparency alone is not enough. The auditing of AI-generated information is becoming increasingly important.
New Zealand’s guidelines for the use of generative AI in courts and tribunals already require a qualified person to check information generated by AI and subsequently used in tribunal proceedings.
In much the same way financial auditors ensure accuracy in accounting, AI auditors will play a pivotal role in maintaining the integrity of valuations.
Building on earlier research, we are auditing automated valuation model estimates by comparing them with the market transaction prices of the same houses sold in the same period.
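A simplified sketch of that kind of audit (with invented figures) compares matched pairs of AVM estimates and sale prices and reports two standard metrics – median absolute percentage error for accuracy, and median signed error for systematic over- or undervaluation:

```python
import pandas as pd

# Invented matched pairs: the AVM estimate for a house and its actual
# transaction price within the same period.
pairs = pd.DataFrame({
    "avm_value":  [720_000, 845_000, 1_210_000, 560_000],
    "sale_price": [700_000, 910_000, 1_150_000, 590_000],
})

pct_error = (pairs["avm_value"] - pairs["sale_price"]) / pairs["sale_price"]

print(f"Median absolute % error: {pct_error.abs().median():.1%}")
print(f"Median signed % error:   {pct_error.median():+.1%}")
```

If the signed error drifts away from zero, the model is systematically over- or undervaluing homes, and the bias correction described above needs updating.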
It is not just about trusting the algorithms, but about trusting the people and systems behind them.