The concept of intelligent beings has been around for a long time. The ancient Greeks had myths about mechanical beings, and Chinese and Egyptian engineers built automatons. The beginnings of modern AI, however, can be traced back to classical philosophers' attempts to describe human thinking as a symbolic system. Between the 1940s and 1950s, a handful of scientists from various fields discussed the possibility of creating an artificial brain. This led to the founding of AI research as an academic discipline in 1956, at a conference at Dartmouth College in Hanover, New Hampshire. The term was coined by John McCarthy, who is now considered the father of Artificial Intelligence.
Despite a well-funded global effort over several decades, scientists found it extremely difficult to create intelligence in machines. Between the mid-1970s and the 1990s, AI research suffered acute shortages of funding; these periods came to be known as the 'AI Winters'. Interest revived in the 1980s, when the Japanese government launched its Fifth Generation Computer project to advance AI, and again in the late 1990s, when American corporations turned back to the field. Finally, in 1997, IBM's Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov.
As AI continued to advance, largely thanks to improvements in computer hardware, corporations and governments began to apply its methods successfully in narrow domains. Over the last 15 years, Amazon, Google, Baidu, and many others have leveraged AI technology to huge commercial advantage. Today, AI is embedded in many of the online services we use. As a result, the technology not only plays a role in every sector but also drives a large part of the stock market.