
Urgent call for action to ensure responsible use of AI

Bias, the c-word, regulation, and education in relation to artificial intelligence were all on the agenda at a Business School panel discussion held in collaboration with the Institute of Directors.

Experts: Richard McLean, Helen Lu, Frith Tweedie and Alex Sims with NBR journalist and moderator Will Mace.

A panel of experts has stressed the need for boards and executives to take accountability for responsible AI practices in their businesses and organisations.

“If you walk away from this event with one impression, it’s that you really need to start developing an artificial intelligence policy,” Alex Sims, associate professor of commercial law, told a room full of business leaders, academics and professionals at a panel discussion hosted by the Business School and the Institute of Directors this week.

When the moderator, NBR journalist Will Mace, asked which of the more than 70 attendees had an artificial intelligence policy, only a couple raised their hands.

The panel, which also included Dr Helen Lu, FinTech specialisation lead of the Master of Business Analytics programme; Richard McLean, co-founder and executive chair of ElementX, a New Zealand-based AI engineering company; and Frith Tweedie, went on to emphasise the importance of aligning AI strategies with organisational goals.

They highlighted the need for organisations to evaluate the problems they aim to solve and assess the technological solutions available while also considering the regulatory risk landscape. McLean urged directors to keep well-informed about the benefits and risks of AI, emphasising that responsible adoption is crucial to avoid potential damage to reputation and trust.

Various governance models and new roles were also discussed, including the emerging trend of Chief AI Ethics Officers and AI review boards to drive responsible AI practice.

The c-word

“There’s obviously a lot of fear around ChatGPT being used by students, and the c-word… cheating,” said moderator Will Mace as he shifted the panel discussion towards the use of AI in education.

“Is it logical that using ChatGPT at work is called productivity, while at school it’s called cheating?” asked an audience member.

In response, Sims highlighted the equity-related benefits of tools like ChatGPT for students and workers affected by learning challenges such as dyslexia or those who speak English as a second or third language.

“Our task at the University is to assess people’s knowledge, and that means changing our assessment tasks. We’re in a transition period, but we can do it. It’s not a case of saying you can’t use tools like ChatGPT because that’s not setting our students up for the world of work.”

Lu said she encourages the use of ChatGPT in certain assessments, while noting there is still room for more traditional forms of assessment. She added that to use ChatGPT well, a student typically needs a good grasp of the subject matter in order to write strong prompts.


Regulation is coming

A pause on the development of AI like ChatGPT wouldn’t be practical or useful, agreed the panellists, who reiterated that while regulatory measures are being developed, organisations need to implement policies to govern the use of such tools.

“If we took a pause, would those who are working on regulation speed up, and would some of these tech companies actually pause? I don’t think so,” said Lu.

Tweedie said, “Regulation is slowly making its way here. The EU Artificial Intelligence Act is expected to come into force in the next couple of years. It introduces massive fines of up to 6 percent of global annual turnover, or 30 million euros, for those found in breach of the Act.”

The Act deems various uses of AI, such as facial recognition in public places and social scoring, unacceptable.

Meanwhile, AI is already being developed to regulate AI, said McLean, referring to an IBM AI tool trained on the EU legislation that could guide executives and boards on responsible and ethical use.

“These kinds of tools will be able to push information back up towards the executives and the board to show whether you’re drifting or starting to see bias.”

Bias

Bias in AI has emerged as a significant concern and its implications are far-reaching, agreed the panellists.

One notable example, said Tweedie, was Amazon’s AI recruitment tool, which was trained on historical hiring data to identify successful candidates. The tool exhibited a bias towards male candidates, even though gender was not explicitly included as a variable. Studies have also shown that facial recognition technology performs markedly worse at accurately identifying women of colour.

These examples, said the panellists, illustrated the complex nature of bias within AI systems and the urgent need for businesses and organisations to not only be aware of potentially harmful outcomes but also work to mitigate them.

University of Auckland Public Release.