
Governments need to focus on AI’s real impact, not get caught up in the hype generated by Big Tech

Statistics Canada recently released a report estimating which professions are likely to be affected by artificial intelligence in the next few years.

Author


  • David Weitzner

    Associate Professor of Management, York University, Canada

It concludes with an optimistic message for education and health-care professionals, suggesting that not only are they expected to retain their jobs, but their productivity will be enhanced by AI advancements. However, the outlook is grimmer for those in finance, insurance, information and cultural industries, who are predicted to see their careers derailed by AI.

Should doctors and teachers now breathe easy, while accountants and writers panic? Maybe, but not because of the data in this report.

What Statistics Canada offers here is a relatively meaningless exercise. It assumes that it is the technology itself and how well it complements human efforts, not the business models designed to undermine our shared humanity, that is the key determinant. By making this mistake, the report is yet another casualty of buying into corporate-driven optimism at the expense of uglier business realities.

High exposure to AI hype

Corporations pushing new innovations or products that play on our greatest hopes and fears is nothing new. The only thing that may be novel is the sheer scale of Big Tech’s hopes for AI impact, which seem to reach every industry.

It’s no surprise, then, that there is widespread anxiety about what industries and sectors will be replaced by AI. Nor is it surprising that Statistics Canada would seek to allay some of those fears.

The study groups jobs into three categories:

  • those with high AI exposure and low complementarity, meaning humans may be competing directly with machines for these roles;
  • those with high AI exposure and high complementarity, where automation could enhance the productivity of the workers who remain essential to the job;
  • and those with low AI exposure, where replacement doesn’t seem to be a threat yet.

The report’s authors claim their approach – examining the relationship between exposure and complementarity – is superior to older methods that looked at occupations or tasks when analyzing the impact of automation on workplaces.

However, by focusing on these categories, the study still buys into corporate hype. These categories of analysis don’t reflect how AI is actually being deployed. Over the past few years, new windows have opened up, allowing us a clearer view of the ways Big Tech is rushing to deploy AI. The newly revealed unethical tactics render the predictive categories of exposure and complementarity fairly meaningless.

AI is often driven by people

Recent developments have shown that even jobs with high AI exposure and low AI complementarity still rely on humans behind the scenes to do essential work. Take Cruise, the self-driving car company General Motors acquired for more than $1 billion. Cab driving is a job with high AI exposure and low AI complementarity – we assume a cab is either being controlled by a human driver or, if it’s driverless, by AI.

As it turns out, Cruise’s “autonomous” cabs in California were not, in fact, driverless. There was remote human intervention every few miles.

If we were to accurately analyze this job, there are three categories to consider: in-car human drivers, remote human drivers and autonomous AI-driven vehicles. The second category makes complementarity fairly high here. But the fact that Cruise, and likely other tech companies, obscured this behind-the-scenes human labour raises a whole new world of questions.

A similar situation emerged at Presto Automation, a company specializing in AI-powered drive-thru ordering for chains like Checkers and Del Taco. The company described itself as one of the biggest automation providers in the industry, but it was revealed that much of its “automation” is driven by off-site human workers.

Software company Zendesk has changed its pricing model. It once charged customers based on how often its software was used to try to resolve customer problems. Now, Zendesk only charges when its proprietary AI completes a task without humans stepping in.

Technically, this scenario could be described as high exposure and high complementarity. But do we want to support a business model where the customer’s first point of contact is likely to be frustrating and unhelpful? Especially knowing businesses will roll the dice on this model because they won’t be charged for those unhelpful interactions?

Scrutinizing business models

As it stands, AI presents more of a business challenge than a technological one. Government institutions like Statistics Canada need to be careful not to amplify the hype surrounding it. Policy decisions need to be based on a critical analysis of how businesses actually use AI, rather than on inflated predictions and corporate agendas.

To create effective policies, it’s crucial that decision-makers focus on how AI is truly being integrated into businesses, rather than getting caught up in speculative forecasts that may never fully materialize.

The role of technology should be to support human welfare, not simply reduce labour costs for businesses. Historically, new technologies have prompted fears about job displacement. The fact that future innovations may replace human labour is not new or to be feared; instead, it should prompt us to think critically about how the technology is being used, and who stands to benefit.

Policy decisions, therefore, should be rooted in accurate, transparent data. Statistics Canada, as a key data provider, has an essential role to play here. It needs to offer a clear, unbiased view of the situation so that policymakers can make informed decisions.

Courtesy of The Conversation.