HW2: SSL to the Moon
Last modified: 2024-10-07 00:23
Your ZIP file should include
- All starter code files (.py and .ipynb) with your edits (in the top-level directory)
Your PDF should include (in order):
- Your full name
- Collaboration statement
- Problem 1: figures 1a and 1b
- Problem 2: figures 2a, 2b, and 2c; short answer for 2d
- Problem 3: figures 3a, 3b, and 3c (with captions)
- Problem 4: table 4a and code listing 4b
Please use the provided LaTeX template: https://github.com/tufts-ml-courses/cs152l3d-24f-assignments/blob/main/hw2/hw2_template.tex
Questions?
- First look at the HW2 FAQ post on Piazza
- Then, post a new question to Piazza, using the hw2 topic
Goals
We spent Weeks 3 and 4 learning all about self- and semi-supervised learning.
In this HW2, you'll implement a common method for each style of SSL (self-supervised and semi-supervised), and then evaluate your implementations on a toy dataset.
Problem 1: Establish a baseline via supervised training on the labeled set only.
Problem 2: Can we gain value from pseudo-labeling?
Problem 3: Can we gain value from SimCLR?
Background
To read up on Pseudo-labeling for Problem 2, see our fact sheet.
To read up on SimCLR for Problem 3, see our fact sheet.
Problem Setup
Starter Code and Provided Data
You can find the starter code and our provided "two half moons" dataset in the course GitHub repository here:
https://github.com/tufts-ml-courses/cs152l3d-24f-assignments/tree/main/hw2/
Datasets
Run the top cells of hw2.ipynb, which load and plot the data we'll classify.
This is a variant of the common "two half moons" dataset that is widely used to illustrate various binary classification tasks, especially in semi-supervised learning.
Classifier architecture
Skim MLPClassifier.py, which defines the very simple core architecture we will use:
- input for each example is a 2-dim array/tensor
- hidden layer with 32 units followed by ReLU
- hidden layer with 2 units followed by ReLU
- hidden layer with 2 units followed by L2 normalization
- output layer that produces a 2-dim array/tensor of logits
We can get predicted probabilities for the two classes by applying softmax to the logits produced by the network.
Note that the last hidden layer produces an "embedding" of the input feature vector on the unit circle in 2 dimensions. This is a special case of the common practice in representation learning of embedding onto a unit hypersphere in a desired number of dimensions (often 500+).
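As a reading aid, here is a minimal PyTorch sketch of the architecture described above. Layer names are illustrative; MLPClassifier.py is the authoritative implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPSketch(nn.Module):
    """Hedged sketch of the architecture described above; see
    MLPClassifier.py for the actual starter-code implementation."""

    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(2, 32)   # 2-dim input -> 32 units
        self.hidden2 = nn.Linear(32, 2)   # 32 -> 2 units
        self.hidden3 = nn.Linear(2, 2)    # 2 -> 2 units, then L2 normalized
        self.output = nn.Linear(2, 2)     # produces 2-dim logits

    def forward(self, x_ND):
        h = F.relu(self.hidden1(x_ND))
        h = F.relu(self.hidden2(h))
        z_N2 = F.normalize(self.hidden3(h), dim=1)  # embedding on the unit circle
        return self.output(z_N2)   # logits; apply softmax for probabilities
```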
Problem 1
Worth 15 out of 100 points
Tasks for Code Implementation
Target time for these coding steps: 20 min.
Skim train_super.py, which defines a function for performing training. This is very similar to what we used in HW1. There's nothing you need to implement here, but focus on understanding what this function does.
Tasks for Experiment Execution
Target time for all these steps: 30 min.
Step through hw2.ipynb to achieve the following.
EXPERIMENT 1(a): Explore settings of lr, n_epochs, and seed to find a setting that works well (low validation-set xent) on the bigger labeled-set version of half-moons (see hw2.ipynb). We have plenty of data here, so keep l2pen_mag = 0.0. This step establishes that a solid solution to this dataset is achievable, given enough data.
EXPERIMENT 1(b): Explore settings of lr, n_epochs, and seed to find a setting that works well (low validation-set xent) on the smaller version of half-moons (see hw2.ipynb). Please set l2pen_mag = 2.0 to avoid overly confident predicted probabilities. This smaller version is what we'll try to improve throughout Problems 2 and 3 below.
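One simple way to run these explorations is a small grid sweep. Below is a hedged sketch: the signature of train_super.train_model and its return value are assumptions (we pretend it accepts the settings named above and returns a dict with the best validation xent), so check train_super.py for the real interface.

```python
import itertools
import train_super  # starter-code module; see train_super.py

# Hypothetical sweep loop; tr_loader and va_loader come from hw2.ipynb.
# The train_model signature and returned dict are assumptions.
best_xent, best_cfg = float("inf"), None
for lr, seed in itertools.product([0.3, 0.1, 0.03], [101, 202, 303]):
    result = train_super.train_model(
        tr_loader, va_loader, lr=lr, n_epochs=300, seed=seed, l2pen_mag=0.0)
    if result["va_xent"] < best_xent:
        best_xent, best_cfg = result["va_xent"], (lr, seed)
print("best settings:", best_cfg, "with val xent:", best_xent)
```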
Tasks for Report
Figure 1(a): Show the two-panel visualization (trace plot of losses, decision boundary) representing the best run from Experiment 1(a). No caption necessary here.
Figure 1(b): Show the two-panel visualization (trace plot of losses, decision boundary) representing the best run from Experiment 1(b). No caption necessary here.
Problem 2: Semi-supervised learning via Curriculum Pseudo-labeling
Worth 30 out of 100 points
Tasks for Code Implementation
Target time for these steps: 30 min.
Inside data_utils_pseudolabel.py, make the following edits.
CODE 2(i): Edit make_pseudolabels_for_most_confident_fraction so that, given a trained model and an unlabeled dataset stored in tensor xu_ND (shape N x D), you compute phat_N and yhat_N, tensors holding the maximum predicted probability and the corresponding class label for each of the N=2048 examples in the unlabeled set (see the sketch below).
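Conceptually, the core computation might look like this hedged sketch; it assumes model maps an (N, D) feature tensor to (N, C) logits, and the starter code's exact conventions may differ.

```python
import torch

# Hedged sketch of computing phat_N and yhat_N; assumes `model` maps an
# (N, D) tensor of features to (N, C) logits, as in MLPClassifier.py.
with torch.no_grad():
    logits_NC = model(xu_ND)                    # (N, C) logits on unlabeled set
    proba_NC = torch.softmax(logits_NC, dim=1)  # predicted class probabilities
    phat_N, yhat_N = proba_NC.max(dim=1)        # max probability and its class
```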
Also read through make_expanded_data_loaders so you understand how we merge the original train set with this new pseudolabel dataset to make new data loaders. No edits are necessary there.
Tasks for Experiment Execution
Target time for these steps: 2 hrs.
Step through hw2.ipynb to achieve the following.
EXPERIMENT 2(a): For the best model from 1b (on the smaller dataset), use your code to make pictures of the pseudolabels obtained by thresholding at the 0.9, 0.5, and 0.1 quantiles of the predicted probabilities of each class.
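For intuition, here is a hedged sketch of per-class quantile thresholding, reusing phat_N and yhat_N from the sketch in the code tasks above. The exact keep rule (quantile q within each predicted class) is an assumption, so follow the starter code's actual convention.

```python
import torch

# Hedged sketch: within each predicted class, keep examples whose confidence
# phat_N is at or above that class's q-th quantile (so q=0.5 keeps the most
# confident ~50%). Exact starter-code conventions may differ.
q = 0.5
keep_mask_N = torch.zeros_like(phat_N, dtype=torch.bool)
for c in (0, 1):
    in_c = yhat_N == c
    thresh = torch.quantile(phat_N[in_c], q)    # per-class confidence cutoff
    keep_mask_N |= in_c & (phat_N >= thresh)
x_keep, y_keep = xu_ND[keep_mask_N], yhat_N[keep_mask_N]
```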
EXPERIMENT 2(b): Perform PHASE ONE of curriculum pseudo-labeling. Build a dataset using the model from 1b, with the quantile threshold set to 0.5 (keeping the most confident 50% of all unlabeled data). Run train_super with suitable settings of seed, lr, and n_epochs so you see reasonable convergence and low validation-set xent. Set l2pen_mag=200: even though we have lots of "data", we want to avoid overconfident predicted probabilities.
EXPERIMENT 2(c): Perform PHASE TWO of curriculum pseudo-labeling. Build a dataset using the phase-one model from 2b, with the quantile threshold set to 0.25 (keeping the most confident 75% of all unlabeled data). Run train_super with suitable settings of seed, lr, and n_epochs so you see reasonable convergence and low validation-set xent. Again, set l2pen_mag=200: even though we have lots of "data", we want to avoid overconfident predicted probabilities.
Tasks for Report
Figure 2(a) Show the dataset visualization figure resulting from experiment 2a. The purpose is to sanity check your pseudolabel dataset construction. No caption necessary.
Figure 2(b) Show the two-panel visualization (trace plot of losses, decision boundary) from the best training run of Experiment 2(b). No caption necessary.
Figure 2(c) Show the two-panel visualization (trace plot of losses, decision boundary) from the best training run of Experiment 2(c). No caption necessary.
Short answer 2(d) Reflect on the similarities and differences between the decision boundaries seen here in Problem 2 (with pseudolabels) and the earlier boundary in Fig 1b (which only used the labeled set). Try to align what you see with conceptual knowledge of how pseudo-labeling works. Hint: How do your results comparing supervised and pseudo-label semi-supervised learning contrast with Fig. 1 of the Ouali et al. paper we read?
Problem 3: Self-supervised learning via SimCLR
Worth 45 out of 100 points
Tasks for Code Implementation
Target time for these steps: 3 hr.
In losses_simclr.py, complete the following tasks:
- Task 3(i): Complete the todos in calc_self_loss_for_batch
- Task 3(ii): Implement calc_simclr_loss__forloop
- Task 3(iii): (optional, recommended for speed) Implement calc_simclr_loss__fast
That last task is optional; it focuses on speeding up Python/PyTorch code by using built-in vectorized functions and avoiding for loops. You can skip this step, but note that your later experiments may be much slower, with training for 50 epochs taking several minutes rather than a few seconds.
Next, skim train_self.py to see how we perform training with a self-supervised objective. You don't need to edit anything here, but you should understand how it works.
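For orientation, the NT-Xent objective that SimCLR minimizes can be written as a for-loop over anchors. The sketch below assumes the embeddings of N = 2B augmented views are stacked so that rows i and i + B hold the two views of the same example, and uses an illustrative temperature; the starter code's pairing convention, temperature, and function signatures likely differ, so treat this only as a conceptual guide.

```python
import torch
import torch.nn.functional as F

def nt_xent_sketch(z_NK, temperature=0.1):
    """Hedged sketch of the NT-Xent (SimCLR) loss, for-loop style.

    Assumes z_NK stacks the embeddings of N = 2B augmented views so that
    rows i and i + B hold the two views of the same original example.
    """
    N = z_NK.shape[0]
    B = N // 2
    z_NK = F.normalize(z_NK, dim=1)          # unit-length embeddings
    sim_NN = (z_NK @ z_NK.T) / temperature   # scaled cosine similarities
    loss = 0.0
    for i in range(N):
        j = (i + B) % N                      # index of i's positive pair
        pos = sim_NN[i, j]
        # denominator sums over all k != i (self-similarity excluded)
        others = torch.cat([sim_NN[i, :i], sim_NN[i, i + 1:]])
        loss = loss - (pos - torch.logsumexp(others, dim=0))
    return loss / N
```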
Tasks for Experiment Execution
Target time for these steps: 2 hr.
Step through the Problem 3 section of hw2.ipynb to achieve the following.
EXPERIMENT 3(a): Using your SimCLR loss implementation and the provided train_self module, fit an MLP encoder to the unlabeled half-moons dataset (all U=2048 examples). You'll want to explore lr, n_epochs, and seed to find a setting that works well (low training-set loss; we're looking for values below 4.0 after convergence).
EXPERIMENT 3(b) : Visualize the learned representations for your best SimCLR model from 3(a), and compare these to the best supervised model from 1b (using the smaller data) and from 1a (bigger data). Use provided code in hw2.ipynb.
EXPERIMENT 3(c): Freeze the learned representations from 3a via a call to set_trainable_layers. Next, call train_super.train_model to train a linear output "classifier head" to classify on the provided tr_loader and va_loader. Keep l2pen_mag=0.0, and find reasonable values of the other settings (lr, n_epochs, seed). A hedged sketch of this workflow appears below.
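The overall flow for 3(c) might look like the following; the argument to set_trainable_layers and the signature of train_super.train_model are assumptions, so consult the starter code for the real interfaces.

```python
# Hypothetical workflow; names follow the handout above, but the exact
# arguments are assumptions.
model = best_simclr_encoder                 # encoder trained in Experiment 3(a)
model.set_trainable_layers(["output"])      # assumption: freeze all but the head
result = train_super.train_model(
    model, tr_loader, va_loader,
    lr=0.1, n_epochs=100, seed=101, l2pen_mag=0.0)
```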
Tasks for Report
In your submitted report, include:
Figure 3a with caption : Plot the loss over time for your best SimCLR runs. In the caption, describe your strategy for (1) selecting a seed and (2) selecting a learning rate.
Figure 3b with caption : Show side-by-side the unit-circle embeddings for the test set from (1) your best SimCLR model from 3a, (2) the best supervised model from 1b, and (3) the best supervised model from 1a, using code provided in hw2.ipynb. In the caption, please reflect on the differences between the panels: what about each method's training leads to the embeddings you see?
Figure 3c with caption: Show the two-panel visualization (loss traces on left, decision boundary on right) for your fine-tuned SimCLR classifier from 3c.
Problem 4: Head-to-head comparison
Worth 5 out of 100 points
Tasks for experiment execution
In hw2.ipynb, compute and report the test-set cross-entropy (base 2) and accuracy for each of:
- best supervised model from 1(b)
- best semi-supervised model from 2(c)
- best self-supervised model plus fine-tuning from 3(c)
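A hedged sketch of these two metrics, where logits_NC and y_N are hypothetical names for a model's test-set logits and the true labels:

```python
import math
import torch
import torch.nn.functional as F

# Hedged sketch; logits_NC and y_N are hypothetical names for the model's
# test-set logits and the true test-set labels.
with torch.no_grad():
    xent_base2 = F.cross_entropy(logits_NC, y_N).item() / math.log(2)  # nats -> bits
    acc = (logits_NC.argmax(dim=1) == y_N).float().mean().item()
```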
Tasks for Report
In your submitted report, include:
Table 4(a): Report the final cross entropy and accuracy of each model on the test set. No caption is necessary.
Short answer 4(b) : Include the code for your best implementation of calc_simclr_loss (either the forloop or the fast version). Use the provided style in hw2_template.tex.
Credits
Problem 3 on SimCLR adapted in part from
Homework 4 of CS 1678/2078 at Pitt: https://people.cs.pitt.edu/~kovashka/cs1678_sp24/hw4.html
Q4 of Homework 3 of CS 231n at Stanford: https://cs231n.github.io/assignments2024/assignment3/#q4-self-supervised-learning-for-image-classification
Philip Lippe's tutorial notebook on SimCLR: https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial17/SimCLR.html