
The Distribution Model

Actually Useful Property Price Indices

The Australian housing market is highly complex, characterized by significant regional variations and rapid shifts in price dynamics. Accurately measuring and understanding this market is crucial for a wide range of audiences including policymakers, financial institutions, investors, and individual home owners or purchasers. Traditional home price indices, however, lack the sophistication and granularity required to capture the full picture and are of limited utility.

A modelled average home price index produced at a monthly interval, and at the capital city or state level, could be used to make broad macroeconomic statements about housing. However, it could not be used to comment on / reason about / act upon differing movements across that large region. Nor could it be used to study the impact of some policy or infrastructure change targeting smaller areas - it would simply lack the geographical resolution to do so. Similarly, it could not be used to provide a credible estimate of the current price and capital growth of a specific home, or a collection thereof - there is simply too much variation in home price levels and movements across cities and states for such estimates to be meaningful.

A naive aggregated home price index for smaller (read: suburb / postcode level) regions could be constructed by taking the average price of homes in those regions over a large rolling period (eg. the twelve months trailing the current period). Such an index would, at best, provide a lagging price indicator. More likely, the aggregation's sensitivity to compositional change in housing stock, or to periods of low transaction volumes, would render its outputs implausible / unusable for the intended purpose.

This document introduces the Neoval Distribution Model, the product of our extensive research into the application of modern machine learning techniques to tracking home prices. The result is a set of indices with unprecedented timeliness, resolution, and fidelity - offering a more comprehensive and nuanced view of the housing market, and opening the door to new analytical use cases.

The indices cover all states and territories in Australia, producing price measurements at a weekly frequency, and for any region at the ABS SA2 level, or above. Moreover, they describe the full distribution of potential price outcomes within a region in that week - providing a quantification of uncertainty and risk, while also allowing for the analysis of price movements at any specific quantile.

These are actually useful property price indices. Among other applications, the indices can be used to understand both broad and highly localized price movements in near real time, allowing one to study the impact of targeted historical policy changes, or to monitor and anticipate the impact of ongoing / future ones. They can also be used to retrieve reliable and up-to-date estimates of the current value of a specific home, or a collection thereof. This can be accomplished via indexation from a prior sale price, or (given that the model outputs all quantiles) by estimating the price percentile in which a home falls - a methodology which also allows for an estimate of the value of homes during planning and construction.
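
To make these two valuation approaches concrete, here is a minimal sketch in Python. All figures are invented, and the quantile functions are hypothetical stand-ins for the model's per-week distribution outputs; this is not Neoval's API.

```python
# A minimal sketch of valuation via indexation and via percentile matching.
# All numbers are invented; `quantile_at_sale` / `quantile_now` stand in for
# the model's per-week quantile functions and are not a real Neoval API.
import numpy as np

def index_revalue(prior_price: float, index_at_sale: float, index_now: float) -> float:
    """Scale a prior sale price by the movement of the regional index."""
    return prior_price * (index_now / index_at_sale)

def percentile_revalue(prior_price, quantile_at_sale, quantile_now):
    """Find the percentile the prior price occupied in the sale-week
    distribution, then read the same percentile from the current week."""
    ps = np.linspace(0.001, 0.999, 999)
    prices_then = np.array([quantile_at_sale(p) for p in ps])
    p_home = ps[np.argmin(np.abs(prices_then - prior_price))]
    return quantile_now(p_home)

# Example: a home sold for $910,000 when the index stood at 1.30; with the
# index now at 1.56, the indexed estimate implies 20% capital growth.
print(f"${index_revalue(910_000, 1.30, 1.56):,.0f}")  # $1,092,000
```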

Prior Art

The Distribution Model’s performance characteristics are best understood by contrasting its implementation with status quo methodologies. The generally accepted techniques can be categorized into a handful of main approaches:

  1. Stratified & Mix Adjusted: In this approach, homes are grouped into strata based on their characteristics, and a weighted aggregation of central tendency measures (typically mean or median) is performed. The aggregation weights are calibrated to account for compositional changes in housing stock, preserving comparability across periods.
  2. Hedonic Indices: These indices employ a linear regression model that relates home prices to the attributes of those homes at the time of sale (including the period in which the sale took place). Index values are derived from the coefficients of each time period in the regression (a minimal illustration follows this list). Transaction records for which some relevant attribute of the home is unknown must necessarily be discarded. Although this procedure accounts for the impact of property characteristics on prices, it does so by enforcing a linear and consistent relationship (eg. assuming that an extra bedroom has the same impact on price regardless of floor area, and that this bedroom's impact is consistent over time - as if consumer preference for that bedroom did not change).
  3. Repeat Sales Indices: This methodology also employs a linear regression model. It targets the price difference in paired sales of the same home in different time periods. This procedure inherently controls for the quality of individual properties. Notably, sales that are not repeated (often representing a significant proportion of available records) must be discarded. An ostensible benefit of this regression is that fewer inputs (home attributes) are required, but many of those attributes must still be known in order to validate that the home has not significantly changed between paired sales (if it has, the pair must be excluded).
  4. Mixed Hedonic / Repeat Sales Indices: These indices combine the two prior methodologies to directly target price growth while utilizing pairs of sales that are not strictly identical, but rather highly similar (eg. in the same neighborhood, and with the same number of bedrooms).
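
For concreteness, below is a toy version of approach 2: a time-dummy hedonic regression fitted to synthetic data, with index values recovered from the exponentiated time coefficients. It is a sketch only; production hedonic indices use far richer attribute sets and more robust estimators.

```python
# A toy time-dummy hedonic regression (approach 2) on synthetic data.
# Illustrative only: real hedonic indices use far richer attributes.
import numpy as np

rng = np.random.default_rng(0)
n, n_periods = 2000, 8
period = rng.integers(0, n_periods, n)            # period of sale
beds = rng.integers(1, 6, n)                      # bedrooms
area = rng.uniform(80, 400, n)                    # floor area (m^2)
true_growth = np.linspace(0.0, 0.12, n_periods)   # 12% log growth over the span
log_price = (12.5 + 0.08 * beds + 0.003 * area
             + true_growth[period] + rng.normal(0, 0.1, n))

# Design matrix: intercept, hedonic attributes, and time dummies
# (the first period is the base, so its index value is fixed at 1.0).
X = np.column_stack([np.ones(n), beds, area]
                    + [(period == t).astype(float) for t in range(1, n_periods)])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

index = np.exp(np.concatenate([[0.0], coef[3:]]))  # back-transform time coefficients
print(np.round(index, 3))                          # approximately exp(true_growth)
```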

To construct a useful index, each of these methodologies requires a comparably large number of transaction records for each time period and region. Consequently, such indices are most often produced only for large metropolitan areas, at quarterly or monthly intervals. Even at these low spatial and temporal resolutions, the indices are often noisy and require the use of smoothing algorithms, which may introduce latency or blur acute price movements.

Further, these techniques necessarily segment records, either to produce strata, or in enforcing a strictly linear relationship between a home's features and price. In doing so, they obscure information regarding relationships that may exist between different strata, regions, or time periods, effectively discarding complex (non-linear) dependencies in the dataset.

NOTE

In discussing the limitations of these methodologies, we do not mean to disparage them, or to imply that they are not useful for certain applications. For instance, when carefully constructed for a short span of time (< 5 years) and for a large area with many dwellings (eg. a major metropolitan area), a hedonic regression has the benefit of providing an interpretable estimate of the average impact of home characteristics (eg. number of bedrooms or bathrooms, floor area, etc).

We make use of some of these methodologies internally, to facilitate specific analyses and to act as baselines for the performance of the Distribution Model. In future, we intend to expose some of these baseline models to users.

Methodology

The Concept

The Distribution Model is categorically different from the approaches described above.

The model is implemented as an ensemble of Deep Neural Networks (DNNs). This allows it to leverage dense embedding vectors to represent inputs, and dense hidden vectors to represent similarities or arbitrarily complex relationships between them. Rather than producing a single point estimate (ie. an average price), the model produces the distribution of all potential price outcomes.

[Figure: probability density against price (dollars, 300k to 10M)]

Distribution of Sydney House Prices

Sydney, in this case, is defined as the ABS GCCSA. Depicted is the distribution of all house prices in the week ending 2023-05-20. The geometric mean, median, and mean of the distribution are marked by the vertical lines.

Minimal constraints are placed on the shape of each distribution, allowing it to take whichever form correctly describes prices within that region. That shape typically resembles a log-normal, with a very long tail.

To provide some intuition on why this architecture might supersede the prior methods, consider that correlation tends to exist between price movements in adjacent suburbs, and that autocorrelation tends to exist between adjacent time periods, ie. the home price level in one week will likely not differ dramatically from the preceding or subsequent week. While such information is necessarily discarded by stratification / linear regression models, we believe the DNN architecture facilitates the Distribution Model's ability to discover and encode such patterns*.

Likewise, consider that an extreme (for instance, particularly highly priced) transaction might be the only recorded sale for some time period in a suburb. Such a transaction is perfectly explicable when it is considered as a single sample in the distribution of all home prices within that region, and so it needn’t have undue influence on the price measure as a whole. Targeting of distributions allows the model to incorporate this effect. These distributions, combined with ensembling, provide a measure of the uncertainty inherent in home price movements.

The outcome of this procedure is the granularity and accuracy of the output indices: a price distribution for each suburb-level region in each week. Any desired measure of central tendency (geometric mean, median, or mean), or any quantile, can be taken directly from these distributions to produce smooth indices.
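
To illustrate how such measures are read off a distribution, the sketch below computes the arithmetic mean, geometric mean, and arbitrary quantiles of a log-normal mixture of the kind formalized under "The Details". The parameter values are invented for the example.

```python
# Illustrative extraction of central tendencies and quantiles from a
# log-normal mixture. All parameter values are invented for the example.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

alpha = np.array([0.6, 0.4])    # component weights
mu = np.array([13.6, 14.3])     # component means of log price
sigma = np.array([0.30, 0.45])  # component std devs of log price

mean = np.sum(alpha * np.exp(mu + sigma**2 / 2))  # arithmetic mean of the mixture
geo_mean = np.exp(np.sum(alpha * mu))             # geometric mean, exp(E[log Y])

def cdf(y):
    """CDF of the mixture at price y."""
    return np.sum(alpha * norm.cdf((np.log(y) - mu) / sigma))

median = brentq(lambda y: cdf(y) - 0.5, 1e4, 1e8)  # invert the CDF numerically
p90 = brentq(lambda y: cdf(y) - 0.9, 1e4, 1e8)     # any other quantile likewise

print(f"mean={mean:,.0f} geo_mean={geo_mean:,.0f} median={median:,.0f} p90={p90:,.0f}")
```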

[Figure: price (dollars) against date, 2015 to 2024]

Sydney House Price Indices

Indices as timeseries are produced by taking the same point from the predicted distribution of prices in each week. Shown here are three such timeseries: the geometric mean, median, and mean, covering the decade preceding 2024.

NOTE

*Neural networks are generally considered "black boxes". In the myriad problem domains where they have been demonstrated to achieve state of the art performance, it's often difficult or impossible to determine precisely how they are accomplishing such results. The fact that the Distribution Model outperforms other methods is objectively measured. The idea that it is doing so, in part, by exploiting the existence of such relationships in the dataset is an educated extrapolation, and so is qualified as a belief.

The Details

NOTE

Our models are subject to continuous improvement and refinement. Over time, we expect to discover incremental improvements to performance (as measured by our evaluation metrics) in response to experimentation and continued tuning of model hyperparameters.

Given this, below we offer only a brief, bulleted overview of the core of the model's technical implementation. This is intended to concisely describe the fundamental architecture, without making reference to specific hyperparameter values, which are subject to change.

A more comprehensive description of this core implementation (together with a set of acceptable hyperparameters) can be found in our technical paper.

  • Each element of the ensemble is an identical DNN with randomly initialized parameters.

  • All input features are encoded as categorical embedding vectors. They include only:

    • the time period.
    • the property type (house or unit).
    • the region.
  • As of the time of writing, the targeted region is always an SA2 or SA3 region, as defined by the ABS in the 2021 ASGS. Outputs for higher geographies are produced via weighted composition of distribution parameters for the constituent regions - where weights are directly proportional to normalized dwelling counts.

  • Each DNN targets the parameters of a mixture of log-normal distributions: a probability density function (PDF) describing the distribution of prices for a property of that type, in that region, in that week.

  • Each DNN is fitted independently. The primary objective function is to maximize the likelihood of the observed transaction prices, given the input features.

  • Further to ensembling, we employ multiple standard regularization techniques, including random data augmentation, dropout, and L2 regularization.

  • The outputs of each member of the ensemble (or, where weighted composition is being performed, of each member of the ensemble for each constituent region) are always pooled to produce a final PDF; a sketch of this pooling follows the list. The individual or pooled / composite PDFs can be formally described as:

    f(y | x, T = t) = Σ_{k=1}^{K} α_k(x, t; w) · φ(y; μ_k(x, t; w), σ_k²(x, t; w))

    where:

    • x = (r, c) is the vector of the region r and property type c (house or unit).
    • t is the time period.
    • μ_k(x, t; w), σ_k²(x, t; w) and α_k(x, t; w) are respectively the mean, variance, and component weight of the kth mixture component.
    • w represents the parameters (or weights) of the DNN which produced the mixture component.
    • φ(y; μ, σ²) is the density of a normal distribution with mean μ and variance σ².
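
As referenced in the final bullet above, pooling, whether across ensemble members (equal weights) or across a region's constituents (dwelling-count weights), reduces to concatenating mixture components and rescaling their weights. The sketch below illustrates this with invented parameters; it is not Neoval's implementation.

```python
# A sketch of the pooling described above: each ensemble member (and, for
# higher geographies, each constituent region) emits mixture components,
# which are pooled into one final mixture by rescaling the component
# weights. All parameter values below are invented for illustration.
import numpy as np
from scipy.stats import norm

def mixture_pdf(y, alpha, mu, sigma):
    """Density of a mixture of log-normals at price(s) y."""
    y = np.atleast_1d(y).astype(float)[:, None]
    comp = norm.pdf(np.log(y), loc=mu, scale=sigma) / y  # log-normal densities
    return comp @ alpha

def pool(members, weights=None):
    """Pool mixtures (alpha, mu, sigma) into one, eg. across an ensemble
    (equal weights) or across regions (dwelling-count weights)."""
    weights = (np.full(len(members), 1 / len(members)) if weights is None
               else np.asarray(weights, float) / np.sum(weights))
    alpha = np.concatenate([w * a for w, (a, _, _) in zip(weights, members)])
    mu = np.concatenate([m for (_, m, _) in members])
    sigma = np.concatenate([s for (_, _, s) in members])
    return alpha, mu, sigma

# Two hypothetical SA2s composed with dwelling counts 4200 and 6100:
sa2_a = (np.array([1.0]), np.array([13.5]), np.array([0.35]))
sa2_b = (np.array([0.7, 0.3]), np.array([13.9, 14.6]), np.array([0.3, 0.5]))
alpha, mu, sigma = pool([sa2_a, sa2_b], weights=[4200, 6100])
print(mixture_pdf([800_000, 1_500_000], alpha, mu, sigma))
```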

The Dataset

The dataset used to fit the Distribution Model consists of a comprehensive collection of residential property sales transactions in Australia. The core component of this data is transaction records from the relevant government agency in each state or territory, supplied to Neoval through our agreement with Valocity. To account for latency in data from government sources, the most recent periods are supplemented with transaction records collected from various other sources, including via agreements with Ray White Group, Valocity, or our own collection of publicly disclosed transactions.

The dataset undergoes extensive preprocessing to ensure quality and consistency, with the aim of filtering it to include only transactions that occurred under regular market conditions and at fair market prices. We do not aim to remove statistical outliers (where they are otherwise normal transactions) - as doing so would be counterproductive to modelling the entire distribution of prices.

The data is updated daily to incorporate very recent sales transactions, ensuring that indices are as reflective as possible of current market conditions.

Stochasticity and Revision

The use of DNNs inherently introduces an element of stochasticity into measurement; model outputs may vary slightly between different fittings. Our use of ensembling is, in part, intended to reduce this stochastic variance to within acceptable tolerances. Further, some level of stochasticity and uncertainty is inherent in property markets themselves, and the probabilistic outputs of the model are specifically designed to capture this.

The indices are necessarily revisionary, given that a complete accounting of all sales transactions is not possible until several months after the relevant period. The resulting volatility in the model's outputs for the most recent weeks converges to a stable trend as more data becomes available for those periods. The dataset and model are revised / re-fitted at a daily interval; values for new time periods, and revisions, are published at a weekly interval.

Evaluation and Metrics

Ultimately, the quality and usefulness of property price indices are entirely contingent on their real world explanatory power, as determined by objective measures. The ongoing performance of the Distribution Model is evaluated using a set of metrics that are computed over the entire dataset after each fitting.

Some key metrics include:

  • Percentage of historical sales above and below percentile thresholds: This metric evaluates the model's ability to output a distribution of prices which can accurately explain historical sales. Over sufficiently long periods of time (given the sparsity of data), a well-calibrated distribution would be expected to, for instance, have approximately 10% of sales in each predicted decile, or to have 50% of sales below the predicted median.
  • Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE): These metrics are used to evaluate the indices' accuracy in predicting all prices within the dataset. Baseline values for these metrics (when computed over large geographical areas) are well documented in the literature.
  • Repeat Sales RMSE and MAPE: These metrics evaluate the model's ability to capture price changes over time, using the subset of properties for which two or more sales are present in the dataset. They are particularly relevant for assessing the model's performance in indexing the price of an individual property over time, ie. using the model for valuation (a minimal sketch of these metrics follows this list).
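
As a concrete sketch of the first and third metrics, consider the following; `predicted_cdf` and the input arrays are hypothetical stand-ins for model outputs and transaction records, not a real Neoval interface.

```python
# Minimal sketches of two of the metrics above. `predicted_cdf` and the
# data arrays are hypothetical stand-ins for model outputs and records.
import numpy as np

def decile_calibration(prices, predicted_cdf):
    """Fraction of observed sales falling in each predicted decile;
    a well-calibrated model yields roughly 0.10 in every bin."""
    u = np.array([predicted_cdf(p, i) for i, p in enumerate(prices)])
    counts, _ = np.histogram(u, bins=np.linspace(0, 1, 11))
    return counts / len(prices)

def repeat_sales_mape(first_prices, second_prices, index_ratios):
    """MAPE of resale prices predicted by indexing each first sale forward
    (index_ratios = index at resale / index at first sale)."""
    pred = first_prices * index_ratios
    return np.mean(np.abs(pred - second_prices) / second_prices)
```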

We believe deeply in transparency, and in the continuous monitoring and improvement of our products. We are working to make at least the described metrics available to our users, on this website. In the interim, the reader could refer to the set of comparable (global) metrics presented in our technical paper, or contact us with any specific queries.