Artificial intelligence is a vast, complex and potentially confusing subject. Since we believe it has the potential to transform ELT (and many other aspects of life too, of course), we thought it would be useful to start setting out what AI actually is and demystifying some of the terminology. It’s a topic we plan to delve into more deeply during 2017, looking at the pros and cons, seeking out and analysing specific examples in the field of languages, and sharing our thoughts on what this all means for ELT. But for now, we hope this acts as a useful starting point in simply understanding what AI is and how it works.

What is AI?

AI is a branch of computer science whose goal is to develop computer systems capable of behaving like humans, or of doing tasks that normally require human intelligence

Sundar Pichai

It’s been around since the 1940s but, beyond science fiction, has only really started to enter the public consciousness over the last 5 years or so. It now feels like we are at something of a tipping point, with all of the ‘big 5’ tech giants (Google, Apple, Facebook, Amazon and Microsoft) and China’s Baidu investing massively, in the belief that AI represents their futures. While ELT publishers are still trying to get to “digital first”, Google is now “AI first”, according to CEO Sundar Pichai.

And if something is going to be the core activity of those corporations, it’s almost certainly going to be a major part of our lives whether we like it or not. If you’ve previously laughed at useless answers from Siri or gobbledegook translations from Google Translate, then you need to know that things are changing fast.

AI is often described as the next seismic change in a sequence that began with the industrial revolution and was then followed by the information revolution. In this excellent talk, Benedict Evans puts forward the idea that, just as the industrial revolution automated muscle power (replacing human and animal physical strength), the AI revolution will automate brain power (replacing human cognitive effort).

Benedict Evans on machine learning

Some of the most widely publicised recent applications of AI have been self-driving cars, medical diagnosis and, closer to home, translation. Speech recognition and machine translation are seeping into everyday life.

Promised land or dystopia?

Robot overlord (image: https://www.flickr.com/photos/hisgett/)

The optimistic view is that intelligent computers will have a massively positive effect on human life. For example, it’s widely believed that the widespread use of AI to handle repetitive tasks will free humans to focus on work that requires creative and innovative thinking. We will no longer need people to work as road sweepers or factory workers, or to carry out many of the more mundane white-collar jobs. All of the human creative and productive capacity freed up can then be redirected towards more interesting and rewarding things. Maybe we could even end up with no one having to work full-time any more (that seems like the longest of long shots…). On a more mundane level, AI is likely to add a layer of convenience and ease to our everyday lives in a deeper and more pervasive way than internet-connected mobile devices already have – for example, by transforming how we interact with computers. Why use a mouse or touchscreen when you can just talk to your device? Why tell it what to do when it can predict what you might need and make sensible suggestions?

The dystopian counter-argument is that AI is going to automate huge swathes of society out of jobs, concentrating wealth and power even more in the hands of the 1% (or the 0.1%). Beyond that, of course, there is the science fiction nightmare of cyborg or robot overlords enslaving the very human race that created them.

Ultimately, AI is nothing more than computer programs, so everything depends on who is doing the programming and what their intentions are. At the moment, the technology is nowhere near human levels of intelligence, and estimates vary between ‘decades’ and ‘never’ as to when that might be possible, so it’s probably going to be a while before we find ourselves in the Matrix.

Machine learning

Machine learning is a type of AI which allows computers to learn to do things we haven’t explicitly programmed them to do

That is, software which can change and evolve in response to the data it is exposed to, drawing inferences from data sets such as image libraries, language corpora and medical scans. This is achieved through the software identifying patterns in the data and updating its understanding accordingly – patterns which may not even be discernible to humans. The approach relies on large data sets and a process of ‘training’ the system, i.e. feeding it examples and confirming (or correcting) the patterns it identifies.

A cat

For example, imagine an image recognition system tasked with recognising pictures of cats. It is presented with images, some of which contain cats and some of which don’t. Every time it correctly identifies a cat, it updates its understanding of what ‘cat’ is, and so increases the likelihood of correct identification in future.
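To make that concrete, here’s a minimal sketch of the idea in Python, using the scikit-learn library. The ‘images’ are just invented pairs of numbers standing in for real features, so treat it as an illustration of the principle rather than a real image recognition system.

```python
# A toy 'cat recogniser' using scikit-learn. Each 'image' is
# reduced to two invented numbers (imagine 'ear pointiness'
# and 'whisker density').
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data: four 'images', labelled 1 for cat, 0 for not-cat.
X_train = np.array([
    [0.9, 0.8],   # cat
    [0.8, 0.9],   # cat
    [0.2, 0.1],   # not a cat
    [0.1, 0.3],   # not a cat
])
y_train = np.array([1, 1, 0, 0])

# 'Training': the model adjusts its internal parameters until
# its answers match the labels it has been shown.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new, unseen image: the model applies the pattern it learned.
new_image = np.array([[0.85, 0.75]])
print(model.predict(new_image))        # [1] -> 'cat'
print(model.predict_proba(new_image))  # confidence for each class
```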

Facebook’s News Feed uses machine learning to decide what to show you, based not only on what you click (‘Likes’ etc), but also on factors such as your scrolling behaviour – if you stop scrolling when a particular type of content is visible, it deduces that you are probably reading it, and that ‘preference’ is added to your data model in order to feed future inferences.
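As a toy illustration of how a signal like that might be captured in code – to be clear, the names and thresholds below are entirely invented, not Facebook’s actual system:

```python
# Hypothetical sketch: turning scroll behaviour into an implicit
# 'preference' signal. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class ScrollEvent:
    post_id: str
    topic: str
    seconds_visible: float  # how long the post stayed on screen

DWELL_THRESHOLD = 4.0  # assumption: pausing > 4s means 'probably reading'

def infer_preferences(events):
    """Count each topic the user appeared to pause on."""
    prefs = {}
    for e in events:
        if e.seconds_visible > DWELL_THRESHOLD:
            prefs[e.topic] = prefs.get(e.topic, 0) + 1
    return prefs

events = [
    ScrollEvent("p1", "football", 0.8),  # scrolled straight past
    ScrollEvent("p2", "cats", 7.5),      # user paused here
    ScrollEvent("p3", "cats", 5.2),      # and here
]
print(infer_preferences(events))  # {'cats': 2}
```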

This approach is in contrast with symbolic AI, which works by providing a predetermined ‘map’ or set of rules to the system as a model, so that it can then compare what it sees in the data with the model. This requires every possible variation to be taught to the system, which is hard to scale. For something as complex and ambiguous as language, it becomes pretty much impossible.
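A hypothetical sketch makes the scaling problem clear: every rule has to be written by hand, and anything the rules don’t anticipate gets misclassified.

```python
# The symbolic approach: a hand-written 'definition' of a cat.
# (These features and thresholds are invented for illustration.)
def is_cat_symbolic(image_features):
    return (image_features.get("ears") == "pointy"
            and image_features.get("whiskers", 0) > 10
            and image_features.get("legs") == 4)

print(is_cat_symbolic({"ears": "pointy", "whiskers": 12, "legs": 4}))  # True
# A Scottish Fold's ears aren't pointy, so the rules fail here,
# and a new hand-written rule would be needed for every such variation:
print(is_cat_symbolic({"ears": "folded", "whiskers": 12, "legs": 4}))  # False
```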

Neural networks

Neural networks attempt to mimic how the human brain works through a vast network of connected virtual ‘neurons’

Neural networks power many of the recent major advances in AI. Each connected ‘neuron’ acts as a decision-maker. As the system is trained through exposure to data, connections between neurons are strengthened, building patterns of ‘knowledge’ which it can then apply to future incoming data. In the example of recognising cat pictures, every time an image of a cat is presented, those neurons which correctly identified it are ‘strengthened’, meaning their decisions are given a higher weighting in future. Over time, this creates a pattern of strong connections which is optimised for recognising cats.
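Here’s a minimal sketch of that strengthening process using a single artificial neuron (a perceptron) in plain Python – vastly simpler than a real network, but the weight update at its heart is the same basic idea. It reuses the toy cat features from earlier.

```python
# A single artificial 'neuron' (a perceptron). When it classifies
# an example wrongly, its connection weights are nudged -- the
# 'strengthening' described above. A sketch, not a production network.
import numpy as np

X = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]])
y = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not-cat

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):                  # repeated exposure to the data
    for features, label in zip(X, y):
        prediction = int(weights @ features + bias > 0)
        error = label - prediction       # 0 if correct, +/-1 if wrong
        weights += learning_rate * error * features  # strengthen/weaken
        bias += learning_rate * error

print(weights, bias)
print(int(weights @ np.array([0.85, 0.75]) + bias > 0))  # 1 -> 'cat'
```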


Deep learning

Deep learning describes a multi-layered neural network, with each layer being more abstract than the one below it

This allows for more complex AI – for example, the ability to understand complex human language. Lower layers could focus on identifying individual phonemes; those decisions are then passed up to the next layer, which decides what the words are; and so on upwards through phrases, sentences, semantic meaning etc.
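As a sketch of the idea (reusing the toy cat data rather than real speech), scikit-learn’s MLPClassifier lets you stack hidden layers, each building on the outputs of the one below:

```python
# The same toy cat data, but with a multi-layer ('deep') network.
# Each hidden layer works on the outputs of the layer below it,
# allowing progressively more abstract combinations of features.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]])
y = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not-cat

# Two hidden layers (8 neurons, then 4): a very shallow 'deep' network.
deep_model = MLPClassifier(hidden_layer_sizes=(8, 4),
                           max_iter=2000, random_state=0)
deep_model.fit(X, y)
print(deep_model.predict([[0.85, 0.75]]))  # expected: [1], i.e. 'cat'
```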

Natural Language Processing (NLP)

NLP is the component of AI concerned with understanding human language

It applies the principles of machine learning to language (both written and spoken), covering both understanding and generation. Analysis typically builds up in layers, from lexical analysis to syntactic analysis to semantic analysis to discourse analysis to pragmatic analysis. With that in mind, hopefully it’s clear how the concept of deep learning applies to NLP.
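As a small example of the first two of those layers in practice, here’s the NLTK library tokenising a sentence (lexical analysis) and then tagging parts of speech, a first step towards syntactic analysis:

```python
# Lexical and (the start of) syntactic analysis with the NLTK library.
# Requires: pip install nltk, plus the data packages downloaded below.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Machine translation is improving fast."

tokens = nltk.word_tokenize(sentence)  # lexical layer: split into words
tagged = nltk.pos_tag(tokens)          # syntactic layer: part-of-speech tags

print(tokens)
# ['Machine', 'translation', 'is', 'improving', 'fast', '.']
print(tagged)
# Something like [('Machine', 'NN'), ('translation', 'NN'),
# ('is', 'VBZ'), ('improving', 'VBG'), ('fast', 'RB'), ('.', '.')]
# (exact tags depend on the tagger version)
```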

Automated translation and speech recognition

Applying the principles of NLP and using neural networks, it’s becoming ever more feasible to produce acceptable translations of written and spoken language. If you haven’t used Google Translate for a while, give it a go – for certain language pairs (e.g. English–French, English–Spanish and English–German) it’s really quite impressive. Here’s an example from today’s edition of El País:

Two hours and forty minutes. That’s the time it takes Pokémon Go to fully deplete the battery of a 100% charged mobile phone, according to tests performed by Avast Software with a Samsung Galaxy S6. The voracity of this game should not surprise anyone, since it combines several utilities simultaneously in order to enrich each game: augmented reality, 3D graphics, GPS location, camera, speakers … But the surprising thing is that, even so, this adventure Graphic is not among the ten mobile applications that more battery, storage space and data rate consume each time the user uses them. Specifically, according to the recent report from Avast Software, based on data collected between July and September 2016 from more than three million Android devices worldwide, the list of the top ten apps that mobiles and tablets Devour is as follows.

Not 100% perfect, but easy enough to understand and, in parts, actually pretty good. Here’s another example, from Die Welt:

[Screenshot: an article from Die Welt as translated by Google Translate]

Spoken language is harder, but you’ve probably seen promo videos for Skype Translator which look very impressive. The reality is less slick, but we should expect rapid improvement. With the full weight of the likes of Google, Microsoft and Facebook behind it (and their access to vast amounts of data and processing power), automated translation will soon be the real deal. And that’s something that ELT really can’t afford to ignore.
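If you’d like to try calling machine translation from your own code, here’s a hedged sketch using Google’s Cloud Translation API (v2). It assumes you have a Google Cloud account and credentials already configured; the example sentence is our own.

```python
# A sketch of calling Google's Cloud Translation API (v2) from Python.
# Assumes a Google Cloud account, credentials configured via the
# GOOGLE_APPLICATION_CREDENTIALS environment variable, and
# `pip install google-cloud-translate`.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate(
    "Maschinelle Übersetzung wird immer besser.",
    source_language="de",
    target_language="en",
)
print(result["translatedText"])
# e.g. "Machine translation is getting better and better."
```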

Further reading

If you want to delve further into AI, we’ve put together a list of articles and posts, which will be in this week’s ‘Weekly Reads’ newsletter – if you’re not already subscribed to our mailing list, just fill in the form below.

Cover image from https://www.flickr.com/photos/abelmon/
Neural network image from https://www.flickr.com/photos/87780147@N04/
Cat from https://www.flickr.com/photos/wapiko57/

PS – I am far from an expert in AI, so if you are, I’d genuinely welcome any corrections or additions.

Join our mailing list

Get new ELTjam posts & updates straight to your inbox
