Machine Learning & Deep Learning

Understanding the latest advancements in artificial intelligence can seem overwhelming, but it really boils down to two concepts you’ve likely heard of before: machine learning and deep learning. These terms are often thrown around in ways that make them seem like interchangeable buzzwords, which is exactly why it’s important to understand the differences between them.
And those differences are worth knowing, because examples of machine learning and deep learning are everywhere. It’s how Netflix knows which show you’ll want to watch next, how Facebook recognizes whose face is in a photo, and how a customer service team can predict whether you’ll be satisfied with their support before you even take a customer satisfaction (CSAT) survey.
So what are these concepts that dominate conversations about artificial intelligence, and how exactly are they different?

What is machine learning?


Here’s a basic definition of machine learning:
“Algorithms that parse data, learn from that data, and then apply what they’ve learned to make informed decisions”
An easy example of a machine learning algorithm is an on-demand music streaming service. For the service to make a decision about which new songs or artists to recommend to a listener, machine learning algorithms associate the listener’s preferences with other listeners who have similar musical taste.
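To make that concrete, here’s a minimal sketch of the “similar listeners” idea – not any particular service’s real algorithm – where each listener is represented as a vector of the songs they’ve liked, the most similar other listener is found with cosine similarity, and that listener’s favorites become the recommendations. The listeners, songs, and ratings below are made up for illustration.

```python
# A minimal sketch of "listeners with similar taste" recommendation.
# Listener names, songs, and ratings are invented for illustration.
import numpy as np

# Rows = listeners, columns = songs (1 = liked, 0 = not yet heard/liked).
songs = ["song_a", "song_b", "song_c", "song_d"]
ratings = np.array([
    [1, 1, 0, 0],   # alice
    [1, 1, 1, 0],   # bob
    [0, 0, 1, 1],   # carol
])

def cosine(u, v):
    # Similarity between two listeners' taste vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def recommend(listener_idx):
    me = ratings[listener_idx]
    # Find the most similar other listener...
    sims = [(cosine(me, other), i)
            for i, other in enumerate(ratings) if i != listener_idx]
    _, best = max(sims)
    # ...and suggest songs they liked that this listener hasn't heard yet.
    return [s for s, mine, theirs in zip(songs, me, ratings[best])
            if theirs and not mine]

print(recommend(0))  # alice is most like bob, so she gets ['song_c']
```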
Machine learning fuels all sorts of automated tasks across multiple industries, from data security firms hunting down malware to finance professionals looking out for favorable trades. These algorithms are often designed to work like virtual personal assistants, and they work quite well.
Machine learning involves a lot of complex math and coding that, at the end of the day, serves a mechanical function the same way a flashlight, a car, or a television does. When something is capable of “machine learning”, it means it performs a function with the data given to it and gets progressively better at that function. It’s as if you had a flashlight that turned on whenever you said “it’s dark” and, over time, learned to recognize any phrase containing the word “dark”.
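In code, the “learning flashlight” might look like a tiny text classifier that is shown labeled example phrases and learns which ones mean the light should come on. The phrases, labels, and the bag-of-words model below are assumptions for this sketch, not part of the original analogy.

```python
# A toy "learning flashlight": a classifier given example phrases that
# learns which ones should switch the light on. Training data is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = ["it's dark", "so dark in here", "this room is dark",
           "nice weather today", "good morning", "turn up the music"]
lights_on = [1, 1, 1, 0, 0, 0]   # 1 = the light should turn on

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(phrases, lights_on)

# A new phrase is recognized because it contains a word the model has
# seen ("dark"); a phrase like "I can't see" shares no known words, so
# this simple model has nothing to go on.
print(model.predict(["it is getting dark outside"]))  # -> [1]
```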
Now, the way machines can learn new tricks gets really interesting (and exciting) when we start talking about deep learning.

Deep learning vs machine learning

In practical terms, deep learning is just a subset of machine learning. It technically is machine learning and functions in a similar way (which is why the two terms are sometimes used interchangeably), but its capabilities are different.
Basic machine learning models do become progressively better at whatever their function is, but they still need some guidance. If an ML algorithm returns an inaccurate prediction, an engineer needs to step in and make adjustments. With a deep learning model, the algorithm can determine on its own whether a prediction is accurate or not.
Let’s go back to the flashlight example: it could be programmed to turn on when it recognizes the audible cue of someone saying the word “dark”. Eventually, it could pick up on any phrase containing that word. But if the flashlight had a deep learning model, it might figure out on its own that it should turn on at cues like “I can’t see” or “the light switch won’t work”. A deep learning model is able to learn through its own method of computing – its own “brain”, if you will.

How does deep learning work?

A deep learning model is designed to continually analyze data with a logic structure similar to how a human would draw conclusions. To achieve this, deep learning uses a layered structure of algorithms called an artificial neural network (ANN). The design of an ANN is inspired by the biological neural network of the human brain. This makes for machine intelligence that’s far more capable than that of standard machine learning models.
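As a rough illustration of that “layered structure of algorithms”, here’s a minimal two-layer neural network written from scratch with numpy and trained by gradient descent to learn the XOR function. The layer sizes, learning rate, and step count are arbitrary choices for this sketch; real deep learning frameworks handle the backward pass automatically and scale the same idea up to many layers and far more data.

```python
# A minimal artificial neural network: an input layer, one hidden layer,
# and an output layer, trained by gradient descent to learn XOR.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR of the two inputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # hidden layer (8 units)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # output layer (1 unit)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: the network corrects its own weights from its error,
    # rather than waiting for an engineer to adjust it by hand.
    d_out = p - y                                     # output-layer error
    d_hid = (d_out @ W2.T) * h * (1 - h)              # hidden-layer error
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid;  b1 -= 0.5 * d_hid.sum(axis=0)

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```

The backward pass is the important part: the network adjusts its own weights based on its errors, which is the self-correcting behavior that separates deep learning from the guided models described earlier.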
It’s a tricky prospect to ensure that a deep learning model doesn’t draw incorrect conclusions (which is probably what keeps Elon Musk up at night), but when it works as intended, functional deep learning is a scientific marvel and the potential backbone of true artificial intelligence.
A great example of deep learning is Google’s AlphaGo. Google created a computer program that learned to play the abstract board game Go, a game known for requiring sharp intellect and intuition. By playing against professional Go players, AlphaGo’s deep learning model learned how to play at a level never seen before in artificial intelligence, all without being told when it should make a specific move (as it would be with a standard machine learning model). It caused quite a stir when AlphaGo defeated multiple world-renowned “masters” of the game; not only could a machine grasp the complex and abstract aspects of the game, it was also becoming one of the greatest players of it.
To recap the differences between the two:
  • Machine learning uses algorithms to parse data, learn from that data, and make informed decisions based on what it has learned

  • Deep learning structures algorithms in layers to create an “artificial neural network” that can learn and make intelligent decisions on its own

  • Deep learning is a subfield of machine learning. While both fall under the broad category of artificial intelligence, deep learning is what powers the most human-like artificial intelligence

A simple explanation

We get it – all of this might still seem complicated. The easiest takeaway for understanding the difference between machine learning and deep learning is to know that deep learning is machine learning.
More specifically, it’s the next evolution of machine learning – it’s how machines can make their own accurate decisions without a programmer telling them how.

An analogy to be excited about

Another thing to be excited about with deep learning, and a key part of understanding why it’s becoming so popular, is that it’s powered by massive amounts of data. The “Big Data Era” of technology is providing huge opportunities for new innovations in deep learning. We’re bound to see things in the next 10 years that we can’t even fathom yet.
Andrew Ng, the chief scientist of China’s major search engine Baidu and one of the leaders of the Google Brain Project, shared a great analogy for deep learning with Wired Magazine: “I think AI is akin to building a rocket ship. You need a huge engine and a lot of fuel,” he told Wired journalist Caleb Garling. “If you have a large engine and a tiny amount of fuel, you won’t make it to orbit. If you have a tiny engine and a ton of fuel, you can’t even lift off. To build a rocket you need a huge engine and a lot of fuel.”
“The analogy to deep learning is that the rocket engine is the deep learning models and the fuel is the huge amounts of data we can feed to these algorithms.”
– Andrew Ng (source: Wired)
