A Brief History of AI

With the launch of our client Canada.ai’s collaborative online hub, our tech PR manager decided to take a look back at the rise of artificial intelligence and how we got to where we are today.


This article was written with the help of Canada.ai's AI timeline.

When I was a teenager, perhaps the single largest realization I had was that, if mixed in the right quantities, Kraft Dinner and Mr Noodles perfectly complemented one another. Why does that matter? It doesn't, save for the fact that it puts into perspective the magnitude of Geoffrey Hinton's teenage epiphany: that computers, perhaps with the help of neural networks, could one day think like humans. And, as you might have guessed, only one of us went on to become a world-renowned scientist, credited with laying the foundation upon which nearly all major research in artificial intelligence and machine learning would be done.

Hinton was born in the UK to academic parents—according to a recent Toronto Life profile, Hinton's mother gave him two choices growing up: "Be an academic or be a failure." He chose the first, and as a twenty-something armed with a PhD from the University of Edinburgh, he and his wife moved to the US so Hinton could continue his research at Carnegie Mellon. Eventually, he was offered a role at the Canadian Institute for Advanced Research (CIFAR), where he worked for a number of years and continues to serve as an advisor today. Like anything that stays in Canada for long enough, we eventually claimed him as our own.

Hinton is often referred to as the "godfather of artificial intelligence," but the term itself was coined in 1956 by a man named John McCarthy—the "father of AI"—along with his colleagues Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Laboratories). And while Hinton undoubtedly pushed the study of artificial intelligence and machine learning into the mainstream, since its coining there have been a number of noteworthy advances made in the study of AI.

Take, for instance, the debut of Shakey the robot in 1966, the first mobile robot that used artificial intelligence to reason about its surroundings. Shakey was a product of the Stanford Research Institute and could perform tasks that required planning, route-finding, and the rearranging of simple objects. Shakey was named for its gait; it apparently moved with a slight wobble. In the 1970s, the world saw its first major advances in speech recognition—a component of a now-vital field of AI referred to as natural-language processing (NLP)—through the United States Department of Defense's DARPA Speech Understanding Research program. This program eventually led to the creation of Carnegie Mellon's HARPY system, which was able to understand 1,011 words, the approximate vocabulary of a three-year-old.

After a brief period in the mid-1970s when interest in AI research declined, known as the first "AI Winter," the 1980s saw its resurgence with the creation of the Boltzmann machine by Geoffrey Hinton and Terry Sejnowski. The Boltzmann machine is "a network of symmetrically connected neuron-like units." It's an example of a neural network, and it was used by Hinton and Sejnowski to facilitate deep learning across a variety of use cases like sentiment analysis, voice recognition and fraud detection.

The onset of the 1990s saw the second AI Winter. The lofty goals of the 1980s were not met: systems proved too expensive to maintain, unable to learn, and difficult to update. But, like a pendulum, Y2K saw things shift once again, especially at home in Canada: the University of Alberta launched the Alberta Innovates Centre for Machine Learning (AICML) in 2002; in 2004, CIFAR launched the Neural Computation and Adaptive Perception program, which worked to uncover how the brain converts sensory stimuli into information; and in 2006 came a breakthrough moment in deep learning, when Geoffrey Hinton and his colleague Simon Osindero developed an algorithm that allowed the individual layers of a neural network to be trained more effectively, one layer at a time.

From there, things took off. In 2010, a group of researchers from the University of Toronto—Navdeep Jaitly, Abdel-Rahman Mohamed, George Dahl and Geoffrey Hinton—achieved a major breakthrough in speech recognition (SR) that would later be integrated into Google's SR technology. In 2011, IBM's Watson won Jeopardy!, beating champions Ken Jennings and Brad Rutter. And in 2016, Element AI was founded by another of AI's leading minds, Yoshua Bengio, putting AI at the forefront of solving business problems.

2017 was perhaps the most exciting year yet for Canadian AI: the federal government announced that it would be putting $950M towards developing superclusters of AI organizations and experts; NEXT Canada launched NextAI, the country's first AI hub focused strictly on the commercialization of artificial intelligence; the Vector Institute launched out of Toronto's MaRS Discovery District with Geoffrey Hinton as Chief Scientific Advisor; and Creative Destruction Lab launched its Quantum Machine Learning (QML) program, with the goal of creating more revenue-generating QML companies in Canada by 2022 than the rest of the world combined.

As we head into 2018, it's clear that AI will lead the way in Canada's technology sector. But with all of this innovation in artificial intelligence, machine learning and quantum computing, one thing remains constant: if mixed in just the right quantities, Kraft Dinner and Mr Noodles complement one another just fine.