AI Explainers: What AI news should I know about?

From viral images of Pope Francis out and about in a trendy Balenciaga puffer coat, to clips of controversial podcaster Joe Rogan interviewing Prime Minister Justin Trudeau, news of AI-generated humbuggery is common on our social feeds these days.

Every day, we hear about new ways in which AI technology has started to impact daily life, from the way we search for information on the internet, to chatbots finding their way into our friend lists on social networking apps like Snapchat.

In an era when technical terms such as big data technology, machine learning and generative AI are flung about with reckless abandon, it can be hard to make sense of the noise.

As we learn more about the ways in which AI can be used (or misused), here are explainers and stories from Star journalists about artificial intelligence and its consequences in our lives.

AI programs become capable of acting on their own, improving themselves over time

New, autonomous “AI agents” can act on their own, and even rewrite their own code, the Star’s Kevin Jiang explains.

Launched this April, Auto-GPT — an artificial intelligence program capable of acting on its own and improving itself over time — has sparked the rise of autonomous “AI agents” that some believe could revolutionize the way we work and live.

Unlike current systems like ChatGPT, which require manual commands for every task, AI agents are capable of assigning themselves new objectives to work on with the aim of reaching a greater goal, and without much need for human input — an unprecedented level of autonomy for AI models like GPT-4.
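The autonomy loop described above can be sketched in a few lines of Python. This is an illustrative toy, not Auto-GPT's actual code: the `propose_subtasks` and `execute` functions here are hypothetical stand-ins for the large-language-model calls a real agent would make at each step.

```python
# Toy sketch of an Auto-GPT-style autonomy loop. A real agent would call
# a large language model in propose_subtasks() and execute(); here both
# are simple stand-ins so the loop's structure is visible.
from collections import deque

def propose_subtasks(goal, completed):
    """Stand-in for an LLM call that decomposes a goal into subtasks."""
    if not completed:
        return [f"research: {goal}", f"draft plan for: {goal}"]
    return []  # goal considered satisfied once initial subtasks are done

def execute(task):
    """Stand-in for an LLM or tool call that performs one subtask."""
    return f"result of {task}"

def run_agent(goal, max_steps=10):
    queue = deque(propose_subtasks(goal, []))
    completed = []
    for _ in range(max_steps):
        if not queue:
            # The agent assigns itself new objectives toward the larger
            # goal, without human input -- the key difference from a
            # chatbot that waits for a command per task.
            new_tasks = propose_subtasks(goal, completed)
            if not new_tasks:
                break
            queue.extend(new_tasks)
        task = queue.popleft()
        completed.append((task, execute(task)))
    return completed

results = run_agent("summarize AI news coverage")
```

The `max_steps` cap reflects a common safeguard in real agent frameworks: without it, a self-directing loop has no natural stopping point.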

Is AI coming for your job? Which workers will be replaced first? How can you adapt?

In the coming years, artificial intelligence is expected to have far-reaching impacts across almost every job sector, and concerns in white-collar professions have been running high.

These are the professions most likely to be impacted first — including some you may not expect, like drivers.

That said, some experts say we now have a tremendous opportunity to embrace the technology early on and get ahead of the changes.

AI generating fake news stories attributed to real journalists

Journalists have raised concerns that ChatGPT has fabricated articles and headlines, but attributed them to real journalists and publications.


The Star’s Joanna Chiu demonstrates this by asking an AI chatbot to generate a list of articles by columnist Shree Paradkar.

The chatbot included headlines of columns which Paradkar never wrote, reflecting wider concerns about the abundance of fake references dished out by popular chatbots including ChatGPT.

Experts worry that with rapidly evolving technology, people may not know how to identify false information.

In similar news, a German magazine’s editor-in-chief was recently fired after publishing an AI-generated ‘interview’ with Formula One legend Michael Schumacher.

Schumacher’s quotes were reportedly generated by Character.AI, a large language model that lets users converse with an AI mimicking notables like Elon Musk, Joe Biden or Albert Einstein.

AI tech that can mimic any voice after just three seconds

Microsoft claims its cutting-edge text-to-speech AI model, VALL-E, can mimic any voice — including its emotional tone, vocal timbre and even the background noise — after training on just three seconds of audio.

The development has some experts sounding alarm bells over the technology’s potential for misuse; through VALL-E and other generative AI programs, malicious actors could mass produce audio-based disinformation at unprecedented scales, sources say.

AI has also increasingly been used to create musical covers in the styles of artists who never actually recorded those renditions.

AI-generated music mimicking Drake on TikTok, including a purported collaboration with The Weeknd, has been repeatedly taken down, but having gone viral it may never fully disappear.

The future of hyperrealistic AI deepfakes

Google’s DreamBooth research has led to a breakthrough in AI art, enabling anyone to create digital replicas of real people. Experts are concerned about its potential for misuse.

Google’s DreamBooth has advanced the field of AI art by giving models the ability to study what an individual or object looks like, then synthesize a new “photorealistic” image of the subject.

A Star reporter tested the technology using his own photo, and the resemblance is indeed striking.

The development has some AI researchers and ethicists concerned about its use in so-called “deepfakes” — media manufactured via AI deep-learning to depict fake events.

People are already manipulating the faces of unconsenting women onto the bodies of porn stars, making political figures appear to deliver statements they never made, and creating impossible ads featuring dead celebrities.

An AI-generated image of Pope Francis in a Balenciaga jacket fooled the internet this week, convincing many it was real. Could you tell real from fake?

That viral photo of Pope Francis in a puffer jacket is another example of such an image; it was created by an artificial intelligence program called Midjourney.

Canadian artists are fighting against AI

Visual artists’ work is being gathered online and used as fodder for computer imitations.

It can seem as though AI software is creating the images from nothing, artists like Yang say. But in reality, it is assembling them using fragments of actual images gathered or “scraped” from the internet and stored in massive databases.

Experts say the legalities around AI art are in a grey area in Canada. While the creators of the images own the copyright to them in Canada, there has yet to be a court case involving their use in AI imaging.

Deepfake technology has sinister implications for governments, businesses and individuals

A video circulated in March 2022 of Ukrainian President Volodymyr Zelenskyy calling on Ukrainians to lay down their arms triggered a collective gasp across the world, until the video was debunked as a “deepfake.”

A similarly fake video of controversial podcast host Joe Rogan interviewing Prime Minister Justin Trudeau was posted on YouTube in 2023, and made the rounds online.

The use of this technology also opens the door to more bad-faith use, including spam and scam calls and fraudsters bypassing voice identification systems.

ChatGPT and its “anti-woke” AI rival from Elon Musk

ChatGPT is an AI-powered chatbot capable of natural-sounding conversation.

In March 2023, OpenAI, creator of the viral chatbot, unveiled its most advanced artificial intelligence model yet, named GPT-4. This model received a boost to its creative power, is capable of analyzing images and can process over 25,000 words — allowing for long-form content creation.

Billionaire Elon Musk has accused OpenAI of training its language models to be “woke” after the lab installed safeguards on ChatGPT preventing the chatbot from producing potentially offensive text.

In response, Musk is reportedly working on an “anti-woke” AI rival to ChatGPT after a year of railing against perceived censorship and bias in the high-profile chatbot.

Not all is doom and gloom in the world of artificial intelligence. Many are harnessing AI’s powers for good, too.

AI being used to discover a potential new cancer drug, predict medical staffing needs

Drug discovery can be a long, expensive affair — but with advances in artificial intelligence, researchers from Insilico Medicine and the University of Toronto used AI to identify a weak point in liver cancer cells and synthesize a drug to attack it, in less than a month.

At Unity Health in Toronto, employees are using AI to predict staffing needs in emergency departments. The team has also created an early warning system that alerts doctors and nurses if a patient is at risk of going to the ICU or dying.

Navigating love and relationships using AI

AI companies have infiltrated the dating space, peddling algorithmic matchmakers and even an automated pickup artist.

Founded a year ago, the company has signed up 1,000 people — and made just 40 matches so far, according to one of its founders. This is partly because the algorithm can take months to make a single match.

Back in 2017, the Star wrote about “Thistoo,” an AI-powered personal assistant that streamlined legal processes for Canadians going through divorce. Although the company is now defunct, similar services remain available online today.

How AI will revolutionize the food sector

The Star’s contributing columnist Sylvain Charlebois muses about how AI and smart labels could give us better data on our food’s shelf life at home, before we throw anything away.

Up the food chain, AI algorithms are already helping farmers analyze soil, climate, and crop data to predict crop yields, optimize irrigation and fertilization schedules, and improve the efficiency of farming practices.

Using machine learning tech in policing can be problematic

In 2020, a New York Times investigation into AI-powered facial recognition technology used by law enforcement agencies triggered a host of privacy violation investigations across Europe and North America.

In Canada, federal and provincial regulators launched investigations into whether Clearview AI, a company that makes facial recognition technology used by at least four Ontario police forces, breaks Canadian privacy laws.

Soon after, Clearview and the RCMP severed ties, and a subsequent Star investigation uncovered that the facial recognition app had been used far more widely in Canada than previously known.