Methodology · 7 min read · 2026-05-01
Why we publish a public prediction ledger
Most property research keeps its forecasts private. Comfortable for the publisher, useless to the buyer. We do the opposite. Every dated recommendation, graded held / missed / pending after twelve months, in public.
The problem with private forecasts
The pattern is familiar. A buyer's agent picks ten suburbs as their "growth tips" for the year. Twelve months later, two perform brilliantly, six are flat, two underperform. The ones that worked get republished as "as we predicted." The ones that didn't quietly disappear from the next year's list.
This isn't dishonesty so much as gravity. Anyone who publishes verdicts has an incentive to remember the wins and forget the misses. Without an external auditor (and there isn't one for AU residential property research), the whole industry runs on a self-curated highlight reel.
What a public ledger does differently
We publish every dated recommendation propautopilot makes. Twelve months later, each is graded against the outcome that actually landed:
- Held. Outcome lands within ±5% of the prediction's primary metric.
- Missed. Outcome lands more than 5% in the wrong direction.
- Pending. The twelve-month window hasn't elapsed yet, or the primary source for the outcome hasn't published the relevant period.
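The three-state rubric above can be sketched as a small grading function. This is an illustrative sketch, not propautopilot's published implementation: the function name, argument names, and the decision to count any deviation beyond ±5% (in either direction) as a miss are assumptions for the example.

```python
from datetime import date, timedelta
from typing import Optional

TOLERANCE = 0.05           # the ±5% threshold from the rubric
WINDOW = timedelta(days=365)  # twelve-month grading window

def grade(predicted: float, actual: Optional[float],
          published: date, today: date) -> str:
    """Grade one prediction as 'held', 'missed', or 'pending'.

    predicted/actual are values of the prediction's primary metric,
    e.g. median-price growth as a decimal (0.07 == +7%).
    """
    # Pending: the twelve-month window hasn't elapsed, or the
    # primary source hasn't published the outcome period yet.
    if today - published < WINDOW or actual is None:
        return "pending"
    # Held: outcome lands within ±5% of the predicted value.
    if abs(actual - predicted) <= TOLERANCE:
        return "held"
    # Missed: outcome more than 5% away from the prediction.
    return "missed"
```

The point of writing the rubric as code is that it leaves no room for soft judgment: every graded entry is the output of the same deterministic check.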
The ledger is append-only. Verdicts can't be edited after the fact. Misses stay public alongside the original reasoning, so future readers can see what the system got wrong and why. The grading rubric is documented at /methodology. Held / missed isn't a soft judgment; it's a defined ±5% threshold against a named primary-source metric.
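One common way to make an "append-only, verdicts can't be edited" claim independently checkable is a hash chain: each entry's hash covers the previous entry's hash, so any retroactive edit breaks every later link. The sketch below illustrates the technique; whether propautopilot's ledger actually uses hash chaining is an assumption, and all names here are hypothetical.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Canonical serialisation so the same entry always hashes the same.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    def __init__(self) -> None:
        self.entries: list[tuple[dict, str]] = []  # (entry, chained hash)

    def append(self, entry: dict) -> str:
        """Append an entry; its hash chains to the previous entry's hash."""
        prev = self.entries[-1][1] if self.entries else "genesis"
        h = entry_hash(entry, prev)
        self.entries.append((entry, h))
        return h

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks verification."""
        prev = "genesis"
        for entry, h in self.entries:
            if entry_hash(entry, prev) != h:
                return False
            prev = h
        return True
```

A reader who stores the latest hash can later confirm that no earlier verdict was quietly rewritten, which is exactly the property a public prediction ledger needs.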
Why most competitors won't do this
Two reasons.
First, accumulation. A new entrant publishes a methodology paper this week and cites their grading rubric. That's table stakes. But a credible twelve-month-graded prediction history takes a year to begin and three years to demonstrate. Year-1 entrants have zero graded entries; year-3 entrants have tens of thousands. Whoever started accumulating earliest has the most defensible record. There is no shortcut.
Second, asymmetry of pain. A research desk that publishes verdicts and grades them publicly takes the L when calls miss. Most buyers don't expect a 100% hit rate (they understand markets), but a research desk has to actually accept being wrong in public. Most don't.
The asymmetric upside
Buyers who want to evaluate research desks before they trust them can audit our ledger directly. Misses don't disqualify us. The absence of a public ledger does. A 65% twelve-month-held rate (industry-realistic for property forecasts) demonstrated in public is more valuable to a buyer than a "100% accuracy" claim with no underlying record.
Three years from now, the ledger compounds into a research moat that doesn't depend on marketing. The system gets better by being audited, not by being polished.
Read further
- The public prediction ledger. Every dated verdict.
- The methodology paper. Full grading rubric + sources.
- The glossary. Terms cited in our verdicts, defined.
More from the research desk
- How seven specialists read a suburb
- What 'editorial transparency' means in property research
- How to read a property auction in Australia
- The 49 metrics that actually matter for a suburb scorecard
- Stamp duty in Australia 2026: state-by-state guide
- The hidden costs every Australian property buyer underestimates
- Why do Australian property prices rise faster than wages?