Three key developments in computing research have driven artificial intelligence forward.

Cheap parallel computation is the first of these breakthroughs. Artificial intelligence relies on neural networks, in which each node is loosely analogous to a neuron in a human brain. The software must therefore process many pieces of information simultaneously and combine them to make sense of the signals it receives. When looking at an image, for instance, each pixel must be seen in relation to the pixels around it; when listening to a sentence, each word must be placed in the correct order for the sentence's context to come through. Despite these demands, computer processors could only handle one operation at a time until, about a decade ago, a new technology arrived: the graphics processing unit (GPU). GPUs were initially designed for video games, where millions of pixels must be recalculated every second, and they were soon installed on almost every PC motherboard. By 2005, GPUs were being produced in such large quantities that they became cheap to purchase. In 2009, Andrew Ng and a team at Stanford University realized that GPUs could also run neural networks in parallel, allowing the software to make hundreds of millions of connections between its nodes. A task that once took a conventional processor several weeks can now be accomplished by a cluster of GPUs in less than a day. Today, GPUs still power artificial intelligence programs, and companies like Facebook and Netflix use AI running on GPUs to generate recommendations for their users.
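To see why neural networks parallelize so well, consider a minimal sketch of a single network layer (the function name and the weight values below are invented for illustration, not taken from any real system): each output value is just a weighted sum of the inputs, and every one of those sums can be computed independently of the others, which is exactly the kind of work a GPU does all at once.

```python
def layer_forward(inputs, weights):
    """Compute one dense layer: each output node is a weighted sum of all inputs.

    Each iteration of the outer loop is independent of the others, so a GPU
    can compute every output node simultaneously instead of one at a time
    the way a classic sequential processor would.
    """
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

inputs = [1.0, 2.0, 3.0]
weights = [
    [0.1, 0.2, 0.3],   # weights feeding output node 0
    [0.4, 0.5, 0.6],   # weights feeding output node 1
]
outputs = layer_forward(inputs, weights)
print(outputs)
```

A real network chains many such layers with millions of weights, but the independence of the per-node sums is the same property Ng's team exploited.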

Another major key development in artificial intelligence is big data. Scientists began using the digital universe to teach AI programs new information. All AI programs must be trained: a program needs to see a great many images of cats and dogs before it can reliably distinguish and categorize the two, and it must play many games of chess before it fully understands the game and becomes an expert. This is very similar to how humans learn. Letting the Internet supply this training material greatly sped up the learning process of artificial intelligence.
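The idea that a program "learns" categories from labeled examples can be illustrated with a toy classifier. This is only a hedged sketch (the feature names, numbers, and function names are all invented): it averages the examples it has seen for each label, then assigns new inputs to the nearest average. Real image classifiers are vastly more complex, but the principle of improving with more labeled data is the same.

```python
from collections import defaultdict

def train(examples):
    """Build a model by averaging the feature values seen for each label."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for features, label in examples:
        counts[label] += 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(model, features):
    """Pick the label whose average example is closest to the input."""
    def distance(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(model, key=lambda label: distance(model[label]))

# Invented training data: [ear_pointiness, snout_length] per animal photo.
examples = [([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
            ([0.3, 0.9], "dog"), ([0.2, 0.8], "dog")]
model = train(examples)
print(classify(model, [0.85, 0.25]))  # a pointy-eared, short-snouted animal
```

With only one or two examples per label the averages would be unreliable; with many, the categories sharpen, which is why big data mattered so much.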

The last major key development was simply better algorithms. Digital neural nets were invented in the 1950s, but scientists later realized that they needed to be organized in stacked layers; simply recognizing a human face could require about 15 levels of stacked neural nets working through patterns. In 2006, Geoff Hinton at the University of Toronto made alterations to this method, which he called "deep learning," mathematically optimizing results so that learning accumulated faster. Nowadays, AI systems need these new algorithms to be successful, and they can be found in many products, from Google's search engine to Facebook's algorithms.
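The "stacked layers" idea can be sketched in a few lines. This is a hedged illustration only (the weights are made up, and this is nothing like a trained face recognizer): each layer's output becomes the next layer's input, so later layers can build on the patterns earlier layers found.

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One dense layer: weighted sums passed through a nonlinearity."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def deep_net(inputs, stacked_weights):
    """Feed the input through each layer in turn -- a 'deep' network."""
    for weights in stacked_weights:
        inputs = layer(inputs, weights)
    return inputs

# Three stacked layers with invented weights:
stacked = [
    [[0.5, -0.2], [0.3, 0.8]],   # layer 1: 2 inputs -> 2 outputs
    [[1.0, -1.0], [0.6, 0.4]],   # layer 2: 2 inputs -> 2 outputs
    [[0.9, 0.1]],                # layer 3: final single output
]
result = deep_net([1.0, 0.0], stacked)
print(result)
```

Hinton's contribution, loosely speaking, was finding ways to train such stacks efficiently, layer upon layer, so that deep networks became practical rather than just theoretically appealing.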

Artificial intelligence programs are becoming smarter every day. Together, these three key developments have made AI systems dramatically better almost overnight, and AI will only continue to improve as it integrates itself into our everyday lives.