
Data Quality Assurance In 2023 Census

Data quality assurance in the 2023 Census outlines the quality rating scale and principles used by Stats NZ to assess the quality of data from the 2023 Census and determine whether it is fit for purpose and suitable for release.

Background

Stats NZ has used a combined census model to produce the 2023 Census dataset. This means we are using data from census form responses and alternative sources to produce a fit-for-purpose dataset.

The alternative sources are:

  • data from administrative (admin) data sources to add people to the census count – this is known as admin enumeration; the census count is made up of census responses, plus people added through admin enumeration
  • data from the 2018 and 2013 Censuses to add missing characteristics of people counted in the census, such as age and country of birth
  • data from admin data sources to add missing characteristics of people counted in the census, such as ethnicity and income
  • statistical imputation methods to provide realistic values for missing characteristics of people counted in the census, when other alternative sources are not available.

It is important for us to determine that the final dataset is fit for purpose and an accurate representation of New Zealand society. To do this, we have aimed to retain consistency and comparability with the quality rating scales used for the 2018 and 2013 Censuses, so that the quality of 2023 Census data can be effectively differentiated. The 2023 Census retains the format, structure, and methodology of the 2018 Census quality rating scale, and follows the principles used for publication of 2018 Census data, to maintain a consistent approach to determining and reporting on the quality of 2023 Census data.

Quality assurance

When assessing the quality of 2023 Census data, we use a phased approach to identify and assess data quality concerns.

Our data experts conduct in-depth analysis of the census data to identify any potential quality issues and ensure any recommendations for the release of data are evidence-based.

Each census variable goes through a comprehensive set of checks and problem resolution processes before it is assessed for quality.

Peer and panel review processes validate that the data quality checks have been undertaken at the correct level for each variable, quality ratings have been correctly assessed, and commentary to inform our output products meets the needs of data users.

2023 quality rating scale

Census variables are assessed and evaluated to produce a guideline rating using the quality rating scale.

The quality rating scale was updated for the 2018 Census to accommodate the use of a combined census model. This has been retained for the 2023 Census with a change to the label of metric 3 to ‘accuracy of responses’, to better reflect the aspects of quality covered by the metric.

For context, the quality ratings for the 2018 Census variables were varied, with most variables receiving ratings of ‘high’ or ‘moderate’. In most cases where variables received ‘poor’ or ‘very poor’ ratings, this was due to the level of missing information for these variables as a result of low response to the 2018 Census and a lack of appropriate alternative data sources for missing information.

The quality rating scale is made up of three metrics that contribute to the overall quality rating for a variable. The description of the data quality of a variable will include both the overall rating, together with a rating for each of the individual metrics within the scale.

The 2023 quality rating scale is made up of three metrics:

  • metric 1 – data sources and coverage
  • metric 2 – consistency and coherence
  • metric 3 – accuracy of responses.

We assign an overall rating to each variable by taking the lowest rating that variable has received across the three metrics.

The range for metric 1 is:

  • 98-100 = very high
  • 95-98 = high
  • 90-95 = moderate
  • 75-90 = poor
  • 0-75 = very poor.

The range for metrics 2 and 3 is:

  • very high
  • high
  • moderate
  • poor
  • very poor.
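The ‘lowest rating’ rule described above can be sketched as follows. This is a minimal illustration using the five-level scale; the function name is ours, not a Stats NZ implementation:

```python
# Five-level quality scale, ordered from worst to best.
SCALE = ["very poor", "poor", "moderate", "high", "very high"]

def overall_rating(metric_ratings):
    """Overall rating = lowest rating received across the three metrics."""
    return min(metric_ratings, key=SCALE.index)
```

For example, a variable rated ‘high’, ‘very high’, and ‘moderate’ on metrics 1, 2, and 3 would receive an overall rating of ‘moderate’.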

Quality ratings for 2023 Census variables will be published in the 2023 ‘Information by concept’ pages in DataInfo+ along with the ratings for each metric and commentary on specific aspects of data quality related to the variable across the three metrics. The overall quality rating is intended to be a summary indicator of data quality of a variable.

Metric 1: Data sources and coverage

This metric calculates a score by rating the overall quality of the data sources used for a census output variable. This aims to:

  • give customers clarity around what sources have gone into the combined output for a census variable
  • show how the rating given to a source (which is based on the quality of the source) will then impact the total score (and quality) of a variable
  • calculate an approximation of ‘missingness’ (gaps in data) and uncertainty of output values for a census variable.

To calculate a score for a variable, each source that contributes to the output for that variable is rated and multiplied by the proportion it contributes to the total output.

Table 1 contains the ratings for each data source.

Table 1: Data source quality rating

Item source indicator               Rating
2023 Census                         1.00
Historical census (2013 and 2018)   Rating differs for each variable
Admin data                          Rating differs for each variable
Statistical imputation              Rating differs for each variable
No information                      0.00

The rating for a valid census response is defined as 1.00. Ratings for other sources are the best estimates available of their quality relative to a census response. We recognise that census responses will include some errors due to, for example, respondent misunderstanding or census processing errors. The 2013 and 2018 Censuses and administrative source ratings reflect measured consistency with the 2023 Census. These ratings for alternative sources may be conservative in some situations where previous census or admin data is likely more accurate than the 2023 Census response.

We calculate the ratings for admin data sources by comparing responses received in the 2023 Census with the data from the admin source(s), with the rating derived from the match rate between the two sources.
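As an illustration of a match-rate calculation under this approach (the function and data are hypothetical, and the actual derivation may involve additional editing and weighting steps):

```python
def admin_source_rating(census_values, admin_values):
    """Rate an admin source as the match rate between census responses
    and admin data for the same records (1.00 = perfect agreement)."""
    matched = sum(c == a for c, a in zip(census_values, admin_values))
    return matched / len(census_values)

# Hypothetical example: 3 of 4 linked records agree, so the rating is 0.75.
rating = admin_source_rating(["A", "B", "C", "B"], ["A", "B", "C", "A"])
```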

Statistical imputation sources are given one of four ratings (0.4, 0.6, 0.8, or 0.9), based on comparative testing for each variable.

‘Editing, data sources, and imputation in the 2023 Census’ will provide more information when it is published on 29 May 2024.

The ‘no information’ indicator applies to both census form responses and admin enumerations when none of the data sources has been able to supply a valid value.

Bands for data sources and coverage ratings

The bands we use for metric 1 are the same as those used in the 2018 Census:

  • 0.98-1.00 Very high quality
  • 0.95-
  • 0.90-
  • 0.75-
  • 0.00-

The following examples show how this would work for two different variables.

Example 1

2023 metric 1 rating = High quality

Source                   Rating   Percent of total   Score contribution
2023 Census              1.00     83%                0.83
Historical census        0.98     8%                 0.08
Admin data               0.96     4%                 0.04
Statistical imputation   0.57     5%                 0.03
Total                             100%               0.98

Example 2

2023 metric 1 rating = Moderate quality

Source           Rating   Percent of total   Score contribution
2023 Census      1.00     94%                0.94
No information   0.00     6%                 0.00
Total                     100%               0.94
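The calculation behind these examples can be sketched as follows. This is a minimal illustration, assuming the five quality labels apply to the listed band lower bounds in descending order; the function names are ours, not Stats NZ's:

```python
# Metric 1 bands as (lower bound, label), from highest to lowest quality.
BANDS = [
    (0.98, "very high"),
    (0.95, "high"),
    (0.90, "moderate"),
    (0.75, "poor"),
    (0.00, "very poor"),
]

def metric1_score(sources):
    """Weighted score: each source rating times its share of the output."""
    return sum(rating * share for rating, share in sources)

def metric1_band(score):
    """Map a score to the first band whose lower bound it reaches."""
    return next(label for lower, label in BANDS if score >= lower)

# Example 1 as (rating, share) pairs.
example1 = [(1.00, 0.83), (0.98, 0.08), (0.96, 0.04), (0.57, 0.05)]

# Example 2: 6% of records have no information (rating 0.00).
example2 = [(1.00, 0.94), (0.00, 0.06)]
```

Note that the unrounded score for example 1 is 0.9753, which falls in the ‘high’ band even though it rounds to 0.98 in the table.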

Metric 2: Consistency and coherence

We rate the level of consistency and coherence in the data on:

  • comparability with the expected trends
  • comparability with other sources
  • contribution of other sources to the census data for this variable.

The ratings account for changes occurring for variables in the 2023 Census as a whole, including the use of admin data and, in some cases, the change in question or concept. In some cases, census data may be moving away from expected time series trends, due to methodological changes that have brought the data closer to the ‘real world’ situation, by addressing historical issues, or biases within census coverage.

For new or changed variables where there is no previous census data for comparison, we used other data sources as the primary source of comparison. These may only be comparable at a national level.

Explainable change could be the result of real-world change, incorporation of other sources of data, or a change in how the variable has been collected.

Variables are assigned a priority level which determines how the variable is assessed for consistency.

Priority 1 variables are assessed for consistency:

  • at the lowest level of classification by statistical area 2 (SA2) geography, age (five-year groups), gender, Māori descent, and level 1 ethnic groups.

Priority 2 variables are assessed for consistency:

  • at the lowest level of the output classification by regional council and territorial authority local board (TALB) geographies, age (five-year groups), gender, Māori descent, and level 1 ethnic groups
  • at the second highest level of the output classification at SA2 geography.

Priority 3 variables are assessed for consistency:

  • at the lowest level of output classification at a national level geography, age (five-year groups), gender, Māori descent, and level 1 ethnic groups
  • at the highest level of the output classification at SA2 geography.

The priority levels for variables in the 2023 Census are provided in accompanying documentation.

Five detailed descriptions guide our assessment and categorisation of variables for this metric:

  • Very high – Variable data is highly consistent with expectations across all consistency checks.
  • High – Variable data is consistent with expectations across nearly all consistency checks, with some minor variation from expectations or benchmarks that makes sense due to real-world change, incorporation of other sources of data, or a change in how the variable has been collected.
  • Moderate – Variable data is mostly consistent with expectations across consistency checks. There is an overall difference in the data compared with expectations and benchmarks that can be explained through a combination of real-world change, incorporation of other sources of data, or a change in how the variable has been collected.
  • Poor – Variable data is not consistent overall with expectations across one or more consistency checks. There is an overall difference in the data compared with expectations and benchmarks. Where this difference occurs, this cannot be fully explained through likely real-world change, incorporation of other sources of data, or a change in how the variable has been collected.
  • Very poor – Variable data is highly different from expectations across all consistency checks. There is a large overall difference in the data compared with expectations and benchmarks that cannot be explained through real-world change, incorporation of other sources of data, or change in how the variable has been collected.

Metric 3: Accuracy of responses

This metric relates to the accuracy of final output values produced from respondent data. This includes aspects such as coding, level of detail/classification, accuracy of responses, and any other specific quality issues that may have been identified in problem reports.

We used the same overall approach that was used in the 2018 Census for this metric. The ratings are:

  • Very high – Data has no data quality issues that have an observable effect on the data. The quality of coding is very high. Any issues with the variable appear in a very low number of cases (typically less than a hundred).
  • High – Data has only minor data quality issues. The quality of coding and responses within classification categories is high. Any issues with the variable appear in a low number of cases (typically in the low hundreds).
  • Moderate – Data has various data quality issues involving several categories or aspects of the data, or an entire level of a hierarchical classification. The data quality issues could include problems with the classification or coding of data, such as vague responses resulting in coding issues, or responses that cannot be coded to a specific (non-residual) category, thereby reducing the amount of useful, meaningful data available for analysis.
  • Poor – Significant data quality issues emerged during evaluation. Data is considered fit for use but there are limitations on how it can be used and interpreted. There are significant issues with respondent interpretation, coding, and/or classification problems.
  • Very poor – Major data quality problems exist. Data does not reflect reality due to respondent misinterpretation, coding, and/or classification problems.

Derived variables

We create some census output variables from responses to individual questions or from a combination of responses given to two or more questions on the census form. These are called derived variables.

For example:

  • highest qualification is derived from highest secondary school qualification and level of post school qualification
  • tenure of household is derived from a number of questions from the census dwelling form.

Quality ratings for derived variables are dependent on the quality of the input variables. Where quality ratings for derived variables have been produced, we consider the quality of the various input variables and the degree to which they contribute to the derived variable.

Quality ratings for units

Quality ratings will also be produced for the units and counts that make up the 2023 Census dataset. These will cover the units of ‘Individuals’, ‘Dwellings’, ‘Families’, ‘Extended Families’, and ‘Households’, and the counts of ‘Absentees’ and ‘Number of Census Night Occupants’.

Quality ratings for units and counts will not have specific ratings for all three metrics, but the assessment of quality of units will consider all relevant dimensions of quality for their overall quality rating. Quality ratings for ‘Individuals’, ‘Dwellings’, ‘Absentees’, and ‘Number of Census Night Occupants’ will primarily focus on consistency and coherence, while ratings for ‘Families’, ‘Extended Families’, and ‘Households’ will focus on the quality of data sources, and consistency and coherence.

Principles on the output of census data

Publication of 2023 Census data is informed by some broader principles on the release of data. These are:

  • we will facilitate the release of as much data as possible
  • we will be transparent about the methods used to determine data quality
  • we will be consistent with the 2023 Census data quality management model
  • we will recognise and meet Stats NZ’s obligations to Māori under the Treaty of Waitangi
  • published information will include sufficient documentation for users to understand the data and its quality.

The 2023 Census data quality management model, information on how the 2023 Census Programme set out to meet its commitments under the Treaty, and further information on these principles are provided in accompanying documentation.

Decisions on the output of census data with lower quality ratings (‘poor’ and ‘very poor’) follow additional principles, which balance considerations of data access and informed use of census data. These are:

  • customers should be informed thoroughly about the limitations of data rated as poor and very poor quality
  • we will publish data rated as poor quality, with appropriate supporting metadata
  • we will provide data rated as very poor quality only when there is a way to communicate more fully with a customer about the data quality limitations (for example, via the Data Lab or customised data requests)
  • data rated as very poor quality will not be published on our website.

If specific categories within a variable are assessed as not being fit for purpose, or if data is assessed as not being fit for purpose at a subnational or subpopulation level, this will be communicated in tables and / or metadata as appropriate.

ISBN 978-1-99-104996-4 (online)

Stats NZ Public Release.