Interactive Quiz
Test your knowledge!
1. What is the main reason for using mini-batches in stochastic gradient descent?
A) To guarantee a decrease in loss at each iteration
B) To reduce memory consumption and speed up training on large datasets
C) To increase the size of the dataset
D) To avoid having to compute the gradient
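For reference, a minimal sketch of a mini-batch training loop in PyTorch; the dataset, model, and hyperparameters below are illustrative:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(1000, 10)          # 1000 samples, 10 features (toy data)
y = torch.randn(1000, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for xb, yb in loader:              # each step sees only 32 samples, so
    optimizer.zero_grad()          # memory stays small and updates happen
    loss = loss_fn(model(xb), yb)  # far more often than with full-batch GD
    loss.backward()
    optimizer.step()
```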
2. In a neural network, what is the main role of a logistic regression neuron?
A) Compute a weighted sum of inputs without activation
B) Apply a nonlinear sigmoid activation function to produce an output between 0 and 1
C) Perform a convolution on the input data
D) Memorize previous states in recurrent networks
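For reference, a minimal sketch of a single logistic regression neuron, with illustrative weights and inputs:

```python
import torch

x = torch.tensor([0.5, -1.2, 3.0])   # input features (illustrative)
w = torch.tensor([0.4, 0.2, -0.1])   # weights
b = torch.tensor(0.1)                # bias

z = torch.dot(w, x) + b              # weighted sum of inputs
output = torch.sigmoid(z)            # sigmoid squashes z into (0, 1)
print(output)                        # a value strictly between 0 and 1
```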
3. Which loss function is commonly used to maximize the margin between classes in a binary classification problem?
A) Mean Squared Error (MSE)
B) Cross-entropy
C) Hinge loss
D) Max-margin loss
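For reference, a sketch of the hinge loss max(0, 1 - y * f(x)) for labels y in {-1, +1}; it penalizes examples that fall inside the margin. Scores and labels are illustrative:

```python
import torch

scores = torch.tensor([2.0, 0.3, -1.5])   # raw classifier outputs f(x)
labels = torch.tensor([1.0, 1.0, -1.0])   # targets in {-1, +1}

hinge = torch.clamp(1 - labels * scores, min=0)
print(hinge)   # tensor([0.0000, 0.7000, 0.0000]) -- only the low-margin
               # example (score 0.3) incurs a loss
```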
4. In PyTorch, which class is used to create a dataset that groups inputs and labels?
A) DataLoader
B) TensorDataset
C) DatasetLoader
D) BatchSampler
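For reference, a sketch of pairing inputs and labels with TensorDataset and batching them with DataLoader; the shapes are illustrative:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

inputs = torch.randn(100, 4)             # 100 samples, 4 features
labels = torch.randint(0, 2, (100,))     # binary labels

dataset = TensorDataset(inputs, labels)  # groups inputs and labels together
x0, y0 = dataset[0]                      # indexing returns one (x, y) pair

loader = DataLoader(dataset, batch_size=10, shuffle=True)
for xb, yb in loader:                    # DataLoader handles the batching
    pass
```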
5. When building a neural network in PyTorch, which function applies a fully connected layer?
A) nn.Conv2d
B) nn.Linear
C) nn.ReLU
D) nn.Dropout
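For reference, a sketch of a small fully connected network; nn.Linear computes y = x @ W.T + b, and the layer sizes here are illustrative:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),   # fully connected: 784 inputs -> 128 outputs
    nn.ReLU(),             # nonlinearity (has no weights of its own)
    nn.Linear(128, 10),    # fully connected: 128 -> 10 class scores
)

x = torch.randn(32, 784)   # a batch of 32 flattened 28x28 images
print(model(x).shape)      # torch.Size([32, 10])
```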
6. What is the main purpose of L2 regularization in neural network training?
A) Encourage weights to become large to improve accuracy
B) Add a penalty proportional to the square of the weights to reduce overfitting
C) Randomly remove certain neurons from the network
D) Normalize the inputs of each layer
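For reference, a sketch of L2 regularization in PyTorch: it is usually enabled through the optimizer's weight_decay argument, which corresponds to adding a squared-weight penalty to the loss (up to a constant factor absorbed into the coefficient). The value 1e-4 is illustrative:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

# The same idea written out explicitly as a penalty on the loss:
loss_fn = nn.MSELoss()
x, y = torch.randn(8, 10), torch.randn(8, 1)
l2_penalty = sum((p ** 2).sum() for p in model.parameters())
loss = loss_fn(model(x), y) + 1e-4 * l2_penalty  # discourages large weights
```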
7. What role does the dropout technique play in training a neural network?
A) Normalize activations to speed up training
B) Randomly deactivate neurons to improve generalization
C) Reduce the size of the dataset
D) Increase the depth of the network
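For reference, a sketch of dropout in PyTorch: during training, each activation is zeroed with probability p (0.5 here, illustrative) and the survivors are rescaled; at evaluation time dropout is disabled:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))   # roughly half the entries zeroed, rest scaled by 1/(1-p)

drop.eval()
print(drop(x))   # identity at inference: all ones pass through unchanged
```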
8. What operation does Batch Normalization perform on a mini-batch of activations?
A) It applies a nonlinear activation function
B) It normalizes activations to have zero mean and unit variance, then adjusts with learnable parameters
C) It randomly removes certain neurons
D) It increases the variance of activations for more robustness
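For reference, a sketch of what Batch Normalization does to a mini-batch: each feature is normalized to zero mean and unit variance, then rescaled and shifted by the learnable parameters gamma (weight) and beta (bias). Shapes are illustrative:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=4)
x = torch.randn(32, 4) * 5 + 3           # batch of 32, shifted and scaled

bn.train()
out = bn(x)
print(out.mean(dim=0))                   # ~0 for each feature
print(out.var(dim=0, unbiased=False))    # ~1 (gamma=1, beta=0 at init)
```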
9. Among the following advantages, which one is not a direct effect of Batch Normalization?
A) Speeding up training convergence
B) Reducing sensitivity to weight initialization
C) Completely eliminating overfitting
D) Reducing "Internal Covariate Shift"
10. Which loss function is used in PyTorch for a multi-class classification problem like MNIST?
A) MSELoss
B) CrossEntropyLoss
C) Hinge Loss
D) Max-margin loss
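For reference, a sketch of CrossEntropyLoss for a 10-class problem such as MNIST: it takes raw logits (no softmax needed) and integer class labels. Shapes are illustrative:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()

logits = torch.randn(32, 10)            # batch of 32, one score per digit
targets = torch.randint(0, 10, (32,))   # true class indices in [0, 9]

loss = loss_fn(logits, targets)         # applies log-softmax + NLL internally
print(loss)
```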