The lack of information on the contact-tracing app raises questions about its potential for success and future repurposing, says data surveillance and privacy expert Roger Clarke.
Only a malcontent could begrudge the nine Australian governments considerable credit for their mostly calm, mostly considered and mostly successful handling of the COVID-19 pandemic. In comparison with other countries, Australia has contained the outbreak and limited the fatalities.
Politicians based their decisions on the advice of public health specialists. The primary factors appear to have been the isolation of moderate-probability infection risks, substantial shutdown for 6-8 weeks, spatial distancing and considerable investment in contact-tracing. After only two months, the emphasis has already turned towards appropriate ways in which to relax the shutdown and distancing arrangements.
Enter the app.
The theory is that relaxation of workplace and social constraints needs to be accompanied by rapid collection and use of accurate information about the contacts of people who have tested positive to the virus. Those contacts can then be promptly tested, isolated until the results arrive, and quarantined if the test is positive.
The app is claimed to deliver on that need, by augmenting the hazy memories of people who’ve been infected, and enabling contact-tracing teams across the country to perform their work even more effectively than before.
But can it do the job for us?
Analysis and early user-experiences of the app raise many questions.
Bluetooth technology provides only a rough estimate of proximity, and it’s affected by many features of the device and its surroundings. The primary causes of infection are droplets at close range, vapour up to a slightly greater range, and viral material on surfaces, some of which are static and some of which move.
The only criterion that appears to be applied by the scheme, ‘in close proximity for 15 minutes’, is a pretty mediocre means of guessing at infection risk. And it is likely that a great many of the people detected in this way are the obvious contacts to test – those in the same household and workplace as the person who, it has now been discovered, was infectious.
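To see why Bluetooth proximity is such a blunt instrument, consider the log-distance path-loss model commonly used to turn received signal strength (RSSI) into a distance estimate. The sketch below is illustrative only: the calibration constants are assumptions, not COVIDSafe's actual parameters, and its point is how sharply the estimate moves when the environment changes.

```python
# Illustrative only: the log-distance path-loss model often used to turn
# Bluetooth signal strength (RSSI) into a distance estimate. The constants
# are assumed values, not COVIDSafe's actual calibration.

def estimate_distance_m(rssi_dbm: float,
                        measured_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in metres from received signal strength.

    measured_power_dbm: assumed RSSI at 1 metre for this pair of handsets.
    path_loss_exponent: ~2 in free space; higher indoors or near bodies.
    """
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# The same reading maps to very different distances once the environment
# changes the effective path-loss exponent:
for n in (1.8, 2.0, 2.5, 3.0):
    print(f"n={n}: -75 dBm -> {estimate_distance_m(-75, path_loss_exponent=n):.1f} m")
# n=1.8 -> 7.7 m, n=2.0 -> 6.3 m, n=2.5 -> 4.4 m, n=3.0 -> 3.4 m
```

Under those assumed constants, the same reading of minus 75 dBm corresponds to anything from about 3.5 to nearly 8 metres, before device orientation, pockets, bags and intervening bodies are even taken into account.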
Crunching the numbers on take-up
The relevant population to monitor is about 19 million active people. About 10% of them do not have a mobile, and another 11-12% carry a model, or use a software version, that is not capable of running the app. So over 4 million of the 19 million people cannot participate. And it’s likely that many of the most vulnerable populations are heavily represented in that 4 million – the over-70s, people with relevant prior health conditions, the institutionalised, people in lower socio-economic segments and the homeless.
Of the 15 million potential players, some will never download the app. The early rush of enthusiasm in the first few days had delivered 3 million downloads as at 30 April. Of those, many aren't operational at any given time: some handsets are switched off, and some are operating in low-power mode. On some, the app is switched off, or Bluetooth is switched off, or the app hasn't been authorised to use it. And on many, the app has been smothered or disabled by the operating system.
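The funnel can be restated as a back-of-envelope calculation. The sketch below uses only the figures above; since the proportion of downloaded apps actually operational at any moment is unknown, it is left as a free parameter:

```python
# Back-of-envelope restatement of the take-up arithmetic above. All input
# figures come from the article; the operational fraction at any given
# moment is unknown, so it is left as a free parameter.

population = 19_000_000      # the relevant active population
no_mobile = 0.10             # ~10% carry no mobile at all
incompatible = 0.115         # ~11-12% have handsets/software that can't run the app

excluded = population * (no_mobile + incompatible)
potential = population - excluded
print(f"excluded: {excluded / 1e6:.1f}M; potential players: {potential / 1e6:.1f}M")
# -> roughly 4.1M excluded, ~14.9M potential players

downloads = 3_000_000        # as at 30 April
for operational_fraction in (1.0, 0.8, 0.5):
    active = downloads * operational_fraction
    print(f"{operational_fraction:.0%} operational -> {active / 1e6:.1f}M apps, "
          f"covering {active / population:.0%} of the relevant population")
```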
Many people infected by the virus are asymptomatic or suffer only minor inconvenience, and hence few of the people in those categories are ever tested. Even among those who have the app installed and operational, and who later test positive, it is unclear what proportion will authorise upload of their data. Perhaps most; but not if ‘bad press’ intervenes.
In the case of a positive test result
Nothing appears to be known yet about the effectiveness and speed of the process whereby uploaded data is interpreted in order to identify people who are infection risks. And we await indications from the public health teams that trace and make contact with the people identified in this way as to how usable the data actually is. Roughly, it seems that they’ll be able to say:
‘A person we can’t name was carrying a mobile which recorded your mobile as being in close proximity to it for 15 minutes at some time on some day; but we don’t know where you were at the time. Anyway, that person has tested positive for the virus.
‘Have we worried you enough yet to motivate you to get tested ASAP?’
With only about 75 new cases detected during the week ending 30 April, even if 3 million apps had been operational the entire time (about 16% of the relevant population), data would have been captured for only about 12 of those cases. So contact-tracers in NSW would have had data from the app available for at most 6 of 30 cases, and each of the other States and Territories would have received data for two, one or zero cases.
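A one-line calculation makes the point, again assuming, generously, that all 3 million installed apps were operational for the whole week:

```python
# The case-coverage arithmetic behind the paragraph above, assuming
# (generously) that all 3 million installed apps were operational all week.

population = 19_000_000
apps_operational = 3_000_000
new_cases = 75               # nationally, week ending 30 April

coverage = apps_operational / population
print(f"coverage ~{coverage:.0%}; app data for ~{new_cases * coverage:.0f} "
      f"of {new_cases} cases nationally")
# -> ~16% coverage, app data for roughly 12 of 75 cases
```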
The Singaporean government was eager about the prospects for its TraceTogether app when it was released, but it became clear that the app was far less effective than hoped. If the Australian app also proves to be a lot less effective than people were expecting, a couple of interpretations offer themselves.
To sustain morale or security theatre?
A positive view might be that the government prioritised the measure because it needed means to sustain public morale. The prevalence of the decade-old motto ‘there’s an app for that’ meant that this was an easy sell, and inexpensive in comparison with alternatives. On one level, placebos don’t work; but on another they do.
A somewhat less positive way of looking at it would be that the government had blind faith in technology’s ability to solve any problem. A current term for this is ‘techno-solutionism’. This ascribes no ill-will on the government’s part, merely naivety, although perhaps willing naivety.
A negative interpretation in the same vein would be that the government was well aware that the technology could not achieve much of what was being claimed for it. The term coined in 2003 for such measures is ‘security theatre’.
In that case, the government is benefiting by sustaining the image of doing something about both the pandemic and the social and economic recovery process. And it can limit the amount of public information, so that, if it does prove to be something of a lemon, the problem is not likely to be noticed.
Or is it ‘the thin end of the wedge’?
The possibility also exists that the measure was ‘designed to fail’. The logic of that interpretation is that the government knew the scheme would fall short of what they wanted, but gauged they could get a friendly-sounding design accepted by the public, then parlay public disappointment about its ineffectiveness into one or more adjustments to the design, in order to make it work properly. This is a well-honed technique in consumer marketing, referred to as ‘bait-and-switch’.
There are many ways in which the moderately privacy-protective features of the initial version can be argued to impede the scheme’s utility. For example (a sketch of where those protections sit in the design follows the examples):
‘To speed up the process, we need the data to be uploaded automatically from the mobile to the cloud-database as soon as the person tests positive, without waiting for consent.’
‘To avoid the delay involved in uploading data, we need everyone’s data pre-loaded onto the cloud-database soon after it’s collected.’
‘We need more data to be collected, especially the mobile’s location when the interaction occurred.’
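To make concrete what each of those arguments would strip away, here is a hypothetical sketch of where the initial design's protections sit. The names and structure are invented for illustration, not drawn from COVIDSafe's actual code:

```python
# Hypothetical sketch of where the initial design's protections sit, and
# which one each proposed 'adjustment' would remove. The names and
# structure are invented for illustration; this is not COVIDSafe's code.

from dataclasses import dataclass, field

@dataclass
class Encounter:
    pseudonym: str        # the other party's occasionally-changing identifier
    timestamp: float
    duration_min: float
    # The third adjustment would add a location field here.

@dataclass
class ContactStore:
    encounters: list = field(default_factory=list)

    def record(self, e: Encounter) -> None:
        # Initial design: the record stays on the handset when collected.
        self.encounters.append(e)
        # The second adjustment would upload here, for everyone, immediately.

    def on_positive_test(self, user_consents: bool) -> None:
        # Initial design: upload only after a positive test AND consent.
        if user_consents:             # the first adjustment deletes this gate
            upload_to_cloud(self.encounters)

def upload_to_cloud(encounters: list) -> None:
    ...  # transmission to the central database
```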
It is not difficult to find aspects of the government’s behaviour that are consistent with such an interpretation. They failed to disclose the design documents and have been exceedingly slow at releasing any of the source-code. So maybe they are concerned about disclosure of design choices that are all-too-obviously oriented towards facilitating the retrofitting of such features into the scheme.
One hint of this is the pseudonymity feature, which, on closer inspection, may well prove to be illusory. Similarly, the failure to engage with advocacy groups during the Privacy Impact Assessment process may have been because that would have necessitated the opening up of more details about the design than the government wanted to make available.
If expansion of the scheme was the plan, it can be achieved in other ways as well. For example, suspected hot-spots could have static collection-points installed, in the form of devices that contain the same functionality, but in something other than a mobile phone. These might be placed at entries to shopping malls, public transport, gyms and beaches. Public infrastructure could be used, or private-sector infrastructure such as digital billboards.
Installing devices that monitor passing mobiles is hardly a novel idea: it is already the primary pattern of use of Bluetooth in the marketing field.
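What such a collection-point would log is easy to sketch. Everything below is hypothetical (the field names, the placement, the scanning interface), but it highlights the key difference from the phone-to-phone scheme: a fixed device knows exactly where each sighting occurred.

```python
# Hypothetical sketch of a static collection-point: a fixed device that
# logs every mobile whose broadcasts it overhears. The field names and
# placement are invented; this describes no real deployment.

import time
from dataclasses import dataclass

@dataclass
class Sighting:
    pseudonym: str        # whatever identifier the passing mobile broadcasts
    rssi_dbm: float       # signal strength, a rough proxy for distance
    timestamp: float
    location: str         # fixed and known, unlike phone-to-phone contacts

LOCATION = "shopping-mall entrance"   # hypothetical placement

def on_advertisement(pseudonym: str, rssi_dbm: float) -> Sighting:
    # Called by the device's Bluetooth scanning layer for each overheard
    # broadcast. The crucial difference from the phone-to-phone scheme:
    # because the collection-point never moves, every sighting has a location.
    return Sighting(pseudonym, rssi_dbm, time.time(), LOCATION)
```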
Does the app harbour more sinister threats?
All of the variants so far fit within the ‘public health management’ frame, helping to deal with a highly infectious disease that kills people, particularly the old and unwell, and for which no vaccine is available now, nor in the near future, and quite possibly not ever.
But it is also necessary to consider possible reasons for deploying the app driven by social control interests as much as by public health motivations. Let’s call this ‘the Dutton scenario’.
The data arising from the app could be expropriated and applied to additional purposes. That is commonly referred to as ‘data creep’. For example, it has been argued that the data opens up possibilities well beyond contact-tracing. Alternatively, the app itself could be used as a vehicle, or as a template, to address quite different needs. The term for that is ‘function creep’.
A more substantial switch would be the association of the data with each individual’s long-term device-identifier, rather than occasionally changing pseudonyms. Then some indicator could be added of the location where the interaction occurred. This might be the position of a static collection-point.
Alternatively, the app could be modified to pick up the device’s GPS coordinates and include them in the data stored and transmitted. And of course, Bluetooth is a very-short-range (‘proximity’) mechanism. So longer-range transmissions could be used, including WiFi and 3G/4G/5G cellular.
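How small those modifications are can be illustrated in a few lines. The identifiers, rotation period and location hook below are all invented, but the sketch shows the difference between occasionally-changing pseudonyms and a long-term identifier, and how little code the switch requires:

```python
# Entirely hypothetical sketch of the difference between occasionally-
# changing pseudonyms and a long-term device identifier, and of how little
# code the 'tracking' additions would take.

import secrets
import time

ROTATION_PERIOD_S = 2 * 60 * 60          # e.g. a fresh pseudonym every 2 hours
_current = {"period": -1, "value": ""}

def rotating_pseudonym() -> str:
    # A fresh random value each period: sightings in different periods
    # cannot be linked to one another.
    period = int(time.time() // ROTATION_PERIOD_S)
    if period != _current["period"]:
        _current["period"] = period
        _current["value"] = secrets.token_hex(8)
    return _current["value"]

DEVICE_ID = "stable-device-identifier"   # linkable across all sightings

def broadcast_payload(tracking: bool) -> dict:
    payload = {"id": DEVICE_ID if tracking else rotating_pseudonym()}
    if tracking:
        payload["gps"] = read_gps()      # one extra line turns tracing into tracking
    return payload

def read_gps():
    ...  # would query the device's location services (hypothetical hook)
```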
Once the idea has been accepted that every person’s mobile should self-report to social control agencies, the migration of the scheme from tracing to tracking could be almost as easy as inserting a ‘k’ into the word.
And even that is not the endgame. The suppliers of mobiles, currently Apple for iPhones (55% of the Australian market), and Google via its Android software for Samsungs and pretty much everything else, could embed surveillance capabilities into their operating systems. And indeed the two companies have already announced plans to build contact-tracing capabilities directly into those operating systems.
Meanwhile, telco data, in particular that used operationally by base-stations, could be harnessed for surveillance purposes. Israel has already done so. It has done so under emergency powers that its Supreme Court has since curtailed, but the Knesset can overcome that constraint quite easily. There are reports that other countries are considering much the same re-purposing of telecommunications infrastructure as surveillance infrastructure.
Quo vader, Darth?
So which narrative is the most appropriate description of the current path of the COVIDsafe app? And which will most accurately describe the scenario that we follow in the coming months?
Human rights advocates are watching closely. And the government is doing very little to allay the impression that the scheme is adaptable and extensible. Meanwhile, Dutton and his supporters await their opportunity. The public needs to be very wary.
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor with the Faculty of Law at the University of New South Wales, and a Visiting Professor in the Research School of Computer Science at the Australian ³Ô¹ÏÍøÕ¾ University.