Which of the following is not true about deep learning?
It is a subset of machine learning.
It is also known as supervised learning.
It does not require a huge set of training data.
It learns from mistakes.
The correct answer and explanation:
Correct Answer: It does not require a huge set of training data.
Explanation:
Deep learning is a powerful subfield of machine learning that utilizes artificial neural networks to model and solve complex problems. Let’s evaluate the truthfulness of each statement:
1. “It is a subset of machine learning”:
✅ True.
Deep learning is indeed a subset of machine learning. While machine learning encompasses a wide range of techniques (e.g., decision trees, support vector machines, etc.), deep learning specifically refers to algorithms based on artificial neural networks, particularly those with multiple (deep) layers.
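To make the "multiple (deep) layers" idea concrete, here is a minimal sketch of a forward pass through a stack of layers in NumPy. The layer sizes and random weights are illustrative assumptions, not from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity between layers
    return np.maximum(0.0, x)

# Three weight matrices -> a "deep" network: 4 -> 8 -> 8 -> 2
# (sizes chosen arbitrarily for illustration)
weights = [rng.standard_normal((4, 8)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((8, 2))]

def forward(x, weights):
    """Pass the input through each layer in turn."""
    for w in weights[:-1]:
        x = relu(x @ w)       # linear transform + nonlinearity
    return x @ weights[-1]    # final layer: no activation

x = rng.standard_normal((1, 4))  # one input example with 4 features
y = forward(x, weights)
print(y.shape)                   # one output with 2 values
```

The "depth" is simply the number of stacked transformations; classical machine-learning models such as decision trees or SVMs have no analogous layered structure.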
2. “It is also known as supervised learning”:
⚠️ Partially true.
While deep learning can be used for supervised learning, it is not exclusively supervised. Deep learning models are also applied in unsupervised learning (e.g., autoencoders, GANs) and in reinforcement learning. That said, many of the most common applications of deep learning, such as image classification and speech recognition, do use supervised learning, which is why the statement is not entirely wrong; it is simply too broad to be accurate.
3. “It does not require a huge set of training data”:
❌ False (Correct Answer).
This statement is not true. Deep learning models generally require large amounts of data to perform well. The reason is that these models have millions of parameters, which need to be adjusted during training. Without sufficient data, the model can overfit—performing well on training data but poorly on new, unseen data. High data volume allows the network to generalize better and learn more accurate representations.
4. “It learns from mistakes”:
✅ True.
This is a core concept in deep learning. During training, the model makes predictions, compares them to the actual results, and calculates a loss (or error). Using a process called backpropagation, the model adjusts its internal parameters to reduce this loss over time—essentially learning from its mistakes.
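The predict / compare / adjust cycle described above can be sketched with a single linear layer and hand-derived gradients (a stand-in for full backpropagation, which applies the same idea layer by layer). The data, learning rate, and step count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.standard_normal((100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                        # targets generated by a known rule

w = np.zeros(3)                       # the model starts knowing nothing
lr = 0.1                              # learning rate
losses = []
for step in range(50):
    pred = X @ w                      # 1. make predictions
    error = pred - y                  # 2. compare to the actual results
    loss = np.mean(error ** 2)        # 3. compute a scalar loss
    grad = 2 * X.T @ error / len(y)   # 4. gradient of the loss w.r.t. w
    w -= lr * grad                    # 5. adjust parameters to reduce loss
    losses.append(loss)

print(losses[0], losses[-1])          # the loss shrinks as the model learns
```

Each pass through the loop is one "mistake" measured and corrected; over many iterations the recovered weights approach the true ones.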
In summary, the incorrect statement is:
“It does not require a huge set of training data.”
Deep learning thrives on large datasets and often underperforms when data is limited.