With all due respect to Gordon Lightfoot, who died earlier this week at 84 years old, this is not a tribute to his brilliant musical career. This is a heads-up about AI and its potential to change our lives in strange ways.
Think I’m exaggerating? Geoffrey Hinton, known as “The Godfather of AI,” recently resigned from Google and is warning about the dangerous path AI is leading us down.
According to an article on dataconomy.com, “Geoffrey Hinton is one of the pioneers of deep learning, a branch of AI that uses neural networks to learn from large amounts of data and perform tasks such as image recognition, natural language processing, and speech synthesis. He is credited with developing some of the key algorithms and concepts that underpin deep learning, such as capsule networks.”
The article quotes Hinton as saying, “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have… So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
How might this impact education, journalism, manufacturing, and so on? Can AI eventually decide its own values and morals? As Hinton notes, the fact that no one knows the answers to such questions is the real concern.
Here are some actual AI-related developments that show where we might be headed:
At least 49 websites have been identified as producing AI-generated news content that is often factually inaccurate. “The writing on these content farms is generally both boring and repetitive, hallmarks of AI-generated content. It's also often filled with blatant falsehoods. For example, in one particularly egregious instance, a site called CelebritiesDeath.com claimed that president Joe Biden had died on April 1, 2023.”
At IBM, as many as 7,800 jobs could be replaced by AI over the next few years. “Hiring specifically in back-office functions such as human resources will be suspended or slowed [and] 30% of non-customer-facing roles could be replaced by AI and automations in five years.”
And the scariest of all, AI appears to be able to read your mind. “Researchers from the University of Texas at Austin have created a mind-reading AI system that can take images of a person’s brain activity and translate them into a continuous stream of text.” Of course, there are potentially good uses for this process. “Called a semantic decoder, the system may help people who are conscious but unable to speak, such as those who’ve suffered a stroke.”
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said Jerry Tang, lead author of the (mind-reading) study. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
For all the ways AI can help people, in the wrong hands it can do great harm. Personally, I don’t want anyone to know what’s going on in my mind, and I don’t want to know what anyone is thinking about me. I also don’t need to read trash journalism produced by a bot.
Who knows what else could happen with artificial intelligence? Should we simply wait and see, or take Hinton’s advice and proceed with extreme caution? If you could read my mind, you’d know my answer to that question.
Proceed with great caution!