Artificial Intelligence: a brief history [Recap - Fin & Tonic on AI & Chatbots]

For years, artificial intelligence remained confined to laboratories. For many, the words evoked apocalyptic sci-fi movies about epic battles between machines and humans.

Today, sometimes without our even being aware of it, AI has burst into our daily lives. You can see it in your house, in your bank account, or even on your smartphone. Apple’s personal assistant, Siri, now has more than 41.4 million monthly active users. And that is only in the US!

In light of the daily press articles on the subject, the opportunities linked to AI amaze some people but also scare others with the changes and the impact it will bring to the world as we know it today. We saw that interest for ourselves at our recent Fin & Tonic: held last week, the event was sold out, with more than 100 people participating. The evening was filled with discussions and meetings and highlighted the latest advances of different companies in the field of AI and chatbots.

We had the opportunity to hear a fantastic keynote speaker, Sorin Cheran of Hewlett Packard Enterprise, share his views on AI and on where the technology is headed: he predicted that labor would be fully automated by the year 2140, which isn’t that far away!


The phenomenon

But what does the term actually encompass? Artificial intelligence is a subfield of computer science that emphasizes the creation of intelligent machines, such as computers or software, that work and react like humans. The basic postulate of AI technology is its ability to continually learn from the data it collects, without humans providing prescriptive instructions for how to do so. The more data the machine gathers and analyzes through its algorithms, the better its predictions become.

Computers with artificial intelligence are designed for activities such as speech recognition, learning, and problem solving, tasks that humans perform naturally.
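The "learning from data" idea above can be sketched with a toy example: a simple least-squares fit whose estimate of an underlying pattern gets closer to the truth as it sees more examples. Everything here is invented for illustration; no real AI system is this simple.

```python
# A minimal sketch of "learning from data": a toy least-squares fit
# whose estimate improves as more examples are seen. The function
# names and numbers below are purely illustrative.

def fit_slope(pairs):
    """Least-squares slope through the origin for (x, y) pairs."""
    return sum(x * y for x, y in pairs) / sum(x * x for x, y in pairs)

def noisy_data(n, true_slope=2.0):
    """y = 2x plus deterministic +/-1 'measurement noise'."""
    return [(x, true_slope * x + (1 if x % 2 == 0 else -1))
            for x in range(1, n + 1)]

few = fit_slope(noisy_data(4))      # estimate from 4 examples
many = fit_slope(noisy_data(1000))  # estimate from 1,000 examples
print(abs(few - 2.0) > abs(many - 2.0))  # more data, smaller error
```

With 4 examples the estimated slope is off by about 0.07; with 1,000 it is off by roughly a millionth, which is the intuition behind "the more data the machine gathers, the better its predictions."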


An exponential pace

The idea of AI has been germinating for centuries, as many myths and legends attest. As science and mathematics evolved, theories on the subject began to emerge.

Nevertheless, it was in the twentieth century that the story accelerated. An emblematic figure at the start of the reflection on AI is the famous Alan Turing. After WWII and the scientific breakthroughs it brought, the mathematician wrote a paper on the notion of machines being able to simulate human beings and do intelligent things. No one can dispute a computer’s ability to process logic, but can a machine think? Turing devised a simple test, today called the Turing Test, to determine whether a machine can actually think like a human. Although the experiment’s relevance is now questioned, it had the merit of starting a reflection on what it means to think.

It was really in 1956 that the domain received its name: artificial intelligence. John McCarthy, an American computer scientist, first coined the term when he held an academic conference on the subject. He defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.” At the time, people grew very excited imagining and discovering the almost unlimited possibilities of the field.

The years passed and with them the reflections and advances in the field of AI. Its story continues with ups and downs and iconic moments such as the birth of Shakey the Robot, the first general-purpose mobile robot.

One of the turning points was the discovery of AI’s commercial value, which in turn attracted investors. Last year, PwC estimated that by 2030, AI could boost the global economy by $15.7 trillion through productivity improvements and increased consumption.

In addition to its monetary impact, its impact on scientific research continued to grow. In 1997, the IBM supercomputer Deep Blue achieved the historic feat of beating the world chess champion, Garry Kasparov. Theoretically, the computer was far superior to the chess player, capable of evaluating up to 200 million positions a second, but the question persisted whether it could think strategically. As Deep Blue won the contest in such a way that Kasparov believed a human being had to be behind it, the answer was, undoubtedly, “yes.”

In 2011, Apple revealed a seemingly small speech-recognition feature on the iPhone: Siri. This little feature was, in fact, a major breakthrough, backed behind the scenes by “thousands of powerful computers, running parallel neural networks, learning to spot patterns in the vast volumes of data streaming in.” Although initially fairly inaccurate, the software claimed 92% accuracy in 2015 thanks to years of learning and improvement.

In 2011 as well, IBM's Watson made a big step by taking on the human brain on the American quiz show Jeopardy!, beating its champions. It was a real feat, because Watson had to answer riddles and complex questions using countless AI techniques, including neural networks.

This summary of the latest changes in the field is just a snapshot of the story. Indeed, the pioneers and scientists who have contributed to the field are numerous; Turing and IBM are only some of the key players who shaped and imagined this technology, long considered "of the future" and now becoming "of our present." The development of big data, along with faster communication and ever-larger data collections, is among the reasons AI has gained prominence recently. To quote Elon Musk, CEO of Tesla: “the pace of progress in artificial intelligence…is incredibly fast…it is growing at a pace close to exponential."


Infinite applications

Finance, music, strategy, science…the applications of AI are endless.

As for the world of business and finance specifically, discoveries and improvements continue to arrive at a frantic pace. Given its high data volumes, accurate historical records, and quantitative nature, the financial world is perfectly suited to AI. Companies therefore hope the new tools will save time, cut costs, and boost revenue. Highly repetitive tasks performed by humans can be completed faster through process automation.

Some of the recent applications in the banking and financial domains are fraud prevention, risk management, insurance underwriting and algorithmic trading.

One promising area is improved customer service through, among other things, chatbots, as we discussed last week at our Fin & Tonic. Chatbots help financial services and tech companies serve customers more efficiently by providing immediate responses.
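At its simplest, a first-line chatbot of the kind described above can be a keyword matcher that maps a customer message to a canned reply and falls back to a human when nothing matches. The sketch below is hypothetical: the intents, replies, and matching rule are invented for illustration and bear no relation to any product shown at the event.

```python
# A minimal, hypothetical rule-based banking chatbot: match keywords in
# a customer message to a canned reply, or hand off to a human. All
# intents and replies below are invented for illustration.

INTENTS = {
    ("balance", "account"): "Your current balance is shown in the app under 'Accounts'.",
    ("card", "lost", "stolen"): "I have blocked your card. A replacement is on its way.",
    ("fraud", "suspicious"): "Thanks for reporting this. A fraud analyst will contact you shortly.",
}

FALLBACK = "Let me connect you with a human colleague."

def reply(message: str) -> str:
    """Return the reply whose keywords best overlap the message."""
    words = set(message.lower().split())
    best, best_hits = FALLBACK, 0
    for keywords, answer in INTENTS.items():
        hits = len(words & set(keywords))
        if hits > best_hits:
            best, best_hits = answer, hits
    return best

print(reply("I think my card was stolen"))
print(reply("What is the meaning of life?"))
```

Production chatbots replace the keyword overlap with a trained intent classifier, but the overall flow, classify the message, answer instantly when confident, escalate to a human otherwise, is the same "immediate service" pattern.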

We were able to learn about chatbot use cases from Ingenico ePayments and Infinity Mobile, and heard a talk on the state of chatbots from In the Pocket. Not only that, but we also learned about a use case from Yields.io on modeling risk with AI.

The speaking portion of the evening concluded with a panel, where the experts had a chance for an open discussion around AI and their experiences with the technology.

All in all, it was an incredibly insightful evening, and the fully packed audience was engaged from beginning to end!


What’s next?

The future of AI is thrilling, to say the least, with its promise of endless possibilities. However, many researchers and influential figures, such as Elon Musk, Stephen Hawking, and Nick Bostrom, have raised concerns about the rise of AI without a proper framework. They have called for more ethical oversight of the development of the discipline.

Moreover, the automation of tasks in industry invites a lot of discussion. Some see it as a social disaster for employment; others see it as an opportunity to let workers focus on more rewarding and interesting tasks. AI can be enormously beneficial, but it needs to be designed in a sensible and thoughtful way. The goal is to work hand in hand with humans to improve the efficiency and success of business tasks, rather than fully replacing humans and automating everything at all costs.

Hope to see you at our next Fin & Tonic on GDPR! Interested in our upcoming events?