Canada’s voluntary AI code of conduct is coming — not everyone is enthused

Companies working with AI in Canada are being presented with a new voluntary code of conduct around how advanced generative artificial intelligence is used and developed in this country.

And while there has already been support from the business community, there are also concerns being raised that it could stifle innovation and the ability to compete with companies based outside of Canada.

Advanced generative artificial intelligence generally refers to AI systems that can produce content. ChatGPT is a popular example, but most systems that generate audio, video, images or text would count as well.

Companies that sign onto the code are agreeing to multiple principles, including that their AI systems are transparent about where and how information they collect is used, and that there are methods to address potential bias in a system.

In addition, they agree to human monitoring of AI systems, and developers who create generative AI systems for public use agree to build them so that anything their systems generate can be detected.

Industry Minister François-Philippe Champagne has announced a voluntary code of conduct for generative AI developers in Canada. (Justin Tang/The Canadian Press)

“I think that if you ask people in the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products,” said Industry Minister François-Philippe Champagne at a conference focusing on AI in Montreal last Wednesday.

Legislation such as Bill C-27, which would update privacy legislation and add rules governing artificial intelligence, is still working its way through Parliament.

In the meantime, the voluntary code gives the federal government another way to set out rules for companies — helping them build products people can trust before they use them, or decide whether to use them at all.

BlackBerry, Telus among signatories

Canadian tech company BlackBerry, which uses generative AI in cybersecurity products, is an initial signatory to the voluntary code.

“If the highway didn’t have directions and traffic lights, things would be chaos. And I think that’s how I view it … in terms of trying to bring trust.” – Charles Egan, CTO of BlackBerry

According to the company’s chief technology officer, the idea is to make sure there is trust for an AI product before it’s even used, and that’s a bit of a culture shift for some.

“People always deploy mobile phones and computers and networks, and then we try to apply trust after the fact,” said Charles Egan in an interview with CBC News.

“I think AI, especially generative AI, has fantastic potentials … so if we put some guidelines in place, we can enjoy the benefits and reduce some of the potential pitfalls of this generative AI explosion that we’re all experiencing,” said Egan.

Egan pointed out that one advantage he and his company see to the Canadian code of conduct is that it mostly imposes requirements on AI developers, and he feels this…
