Artificial intelligence is fast encroaching on every area of our digital lives: picking the social media stories we see, identifying our friends and pets in photos, and even helping us avoid accidents on the road. If you want to understand AI, though, you need to start with the terms underpinning it.
And so we present the TechRadar glossary of AI: five of the key words and phrases you'll want to know to get a handle on this ever-improving tech, and to keep up your end of the conversation the next time the topic crops up around the dinner table.
First, though, a disclaimer: not everyone agrees on the exact definition of some of these words, so you might see them used differently elsewhere on the web. Wherever possible we've stuck to the most commonly used definitions, but with such a new and fast-moving technology there are always going to be discrepancies.
1. Algorithms
Ah, the famous (or infamous) algorithm. Algorithms are sets of rules that computers can follow: if one of your best friends posts a photo of you on Facebook, the rules might say that post should appear at the top of your News Feed. Or if you need to get from A to B on Google Maps, an algorithm can work out the fastest route for you.
The rules are followed by computers but usually set by humans – it's Facebook's engineers who choose what makes a story important, or which roads count as fastest. Where AI starts to come in is in tweaking these algorithms using machine learning, so that computers begin to adapt the rules for themselves. Google Maps might do this if it starts getting feedback data that a particular road is shut.
When image recognition systems get it wrong, that's still an algorithm – a set of rules – at work: the same rules have been applied, but they've led to the wrong result, so a cat-like dog gets labelled as a cat. In many ways, algorithms are the building blocks of machine learning (see below).
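To make the idea concrete, here's a minimal, hypothetical sketch in Python of the kind of hand-written ranking rule described above. The field names and weights are invented for illustration – real feed-ranking systems are vastly more complicated – but the principle is the same: a human writes the rules, and the computer simply applies them.

```python
# A hypothetical, hand-written ranking rule of the kind described above.
# The field names and weights are invented for illustration only.

def score_post(post):
    """Return a relevance score for a single post using fixed rules."""
    score = 0
    if post["author_is_close_friend"]:
        score += 10              # posts from close friends rank higher
    if post["contains_photo_of_you"]:
        score += 20              # being tagged in a photo matters even more
    score -= post["hours_old"]   # older posts gradually sink down the feed
    return score

posts = [
    {"author_is_close_friend": True, "contains_photo_of_you": True, "hours_old": 2},
    {"author_is_close_friend": False, "contains_photo_of_you": False, "hours_old": 1},
]

# Sort the feed so the highest-scoring post comes first.
feed = sorted(posts, key=score_post, reverse=True)
```

The key point is that every number in that sketch was chosen by a person – and that's exactly the part machine learning starts to take over.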
2. Artificial intelligence
Just what is artificial intelligence anyway? Definitions differ depending on who you ask, but in the broadest sense it's any kind of intelligence that has been artificially created. Obviously.
So when Siri replies to you like a real human being, that's artificial intelligence. And when Google Photos seems to know what a cat looks like, that's artificial intelligence too. And Anthony Daniels hiding inside his C-3PO suit is artificial intelligence as well, in a way – the illusion of a talking, thinking robot which is actually controlled by a human.
The definition really is that wide, so you can see why there's often confusion about how it should be applied. There are many different types of AI and many approaches to building it, so make sure you understand the differences – when something is described as having AI built in, that could mean any of a wide range of technologies is involved.
3. Deep learning
Deep learning is a subset of machine learning (see below), which is why the two terms often get jumbled up – and in many cases either can correctly be used to describe the same system. It's machine learning designed to be even more capable, with more nuance and more layers, and intended to work a little more like the human brain does.
Deep learning has been made possible by two key advances: more data and more powerful hardware. That's why it's only recently come into fashion, even though its roots go back decades. If you think of it as machine learning turned up to 11, you can understand why it's getting smarter as computers get more powerful.
Deep learning often makes use of neural networks (see below) to add this extra layer of intelligence. For example, both deep learning and machine learning can learn to recognise a cat in a picture by scanning a million cat images – but whereas a machine learning system needs to be told which features make up a cat, a deep learning system can work out what a cat looks like for itself, as long as there's enough raw data to work from.
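Here's a toy sketch of that contrast using the scikit-learn library. The tiny dataset and the "features" are made up purely for illustration: the first model is handed human-chosen features, while the second – a small multi-layer network standing in for a deep learning model – is given raw numbers and left to find useful patterns across its hidden layers.

```python
# A toy contrast between the two approaches, using scikit-learn.
# The feature names and miniature dataset are invented for illustration only.
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Classic machine learning: a human decides which features matter
# (say, "has whiskers", "has pointy ears") and feeds those numbers in.
hand_crafted_features = [[1, 1], [0, 0], [1, 0], [0, 1]]
labels = [1, 0, 0, 0]  # 1 = cat, 0 = not a cat
LogisticRegression().fit(hand_crafted_features, labels)

# Deep learning (sketched here with a small multi-layer network): the model
# is given raw pixel-like values and left to work out useful features for
# itself across its hidden layers, given enough data and compute.
raw_pixels = [[0.2, 0.8, 0.1, 0.9], [0.5, 0.5, 0.5, 0.5],
              [0.1, 0.9, 0.2, 0.8], [0.7, 0.3, 0.6, 0.4]]
MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000).fit(raw_pixels, labels)
```

Real deep learning systems use far bigger networks and millions of examples, but the division of labour is the same: either a human defines the features, or the layers of the network discover them.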
4. Machine learning
Programming software and hardware to do our bidding is all well and good, but machine learning is the next stage, and it's exactly what it sounds like. It's the machines learning for themselves, rather than having everything specifically spelled out for them each time.
One of the best-known examples is with image recognition. Give a machine learning system enough pictures of a cat, and it will eventually be able to spot a cat in a new picture by itself, without any hints from a human operator. You can think of it as AI networks going beyond their original programming, having first been trained on reams of data.
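As a rough sketch of what "learning from examples" looks like in code, here's a tiny scikit-learn classifier. The two made-up features (has whiskers, has pointy ears) stand in for whatever measurements a real image system would extract; the point is that nobody writes the "whiskers plus pointy ears equals cat" rule by hand – the model infers it from the labelled examples.

```python
# A minimal sketch of learning from labelled examples with scikit-learn.
# The two features are invented stand-ins for real image measurements.
from sklearn.tree import DecisionTreeClassifier

training_examples = [
    [1, 1],  # whiskers, pointy ears
    [1, 0],  # whiskers, floppy ears
    [0, 1],  # no whiskers, pointy ears
    [0, 0],  # neither
]
labels = ["cat", "not cat", "not cat", "not cat"]

# The classifier works out the rule from the examples rather than being told it.
model = DecisionTreeClassifier().fit(training_examples, labels)
print(model.predict([[1, 1]]))  # -> ['cat']
```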
Google's AlphaGo program is another good example: taught by humans but able to make decisions of its own based on its training. What AlphaGo also shows is that many types of AI are very specific – that engine is fantastic at playing Go, but would be next to useless in a self-driving car.
5. Neural networks
Closely tied to the idea of deep learning (see above), neural networks attempt to mimic the processes of the human brain – or as much of the human brain as we understand at this point. The idea itself is decades old, but training large neural networks has only really become practical in the last few years, thanks to high-end processors.
Essentially it means lots and lots of layers. Rather than looking at an image and making a single yes-or-no decision about whether it shows a cat – for example – a neural network weighs up many different characteristics of the image, assigning a different level of importance to each, before combining them into a final verdict. The end result is a cat recognition engine that's much more accurate, which is why image recognition has got so much better in recent years.
If you can't completely grasp the idea, don't worry – neural networks aren't something you can fully understand from a brief three-paragraph definition. But if you think of them as another machine learning tool, designed to recreate some of the subtleties of human intelligence, then you've got the basics.
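For the curious, here's a bare-bones Python sketch of what "layers" means in practice: numbers flow through successive sets of weights, with each layer combining the outputs of the one before it. The weights here are random, purely to show the structure – a real network would learn them from training data.

```python
# A bare-bones forward pass through a two-layer network, using NumPy.
# Weights are random here purely to show the layered structure; a real
# network would learn these values from training data.
import numpy as np

rng = np.random.default_rng(0)

image_features = rng.random(4)   # stand-in for pixel-derived inputs
weights_1 = rng.random((4, 3))   # layer 1: 4 inputs -> 3 hidden units
weights_2 = rng.random((3, 1))   # layer 2: 3 hidden units -> 1 output

hidden = np.tanh(image_features @ weights_1)          # each hidden unit weighs up a mix of inputs
cat_score = 1 / (1 + np.exp(-(hidden @ weights_2)))   # squash to a 0-1 "cat-ness" score

print(cat_score.item())  # with trained weights, closer to 1 would mean "more cat-like"
```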
TechRadar's AI Week is brought to you in association with Honor.
January 12, 2018
David Nield