CS152 L3D: Learning from Limited Labeled Data
HW2: SSL to the Moon


Last modified: 2024-10-07 00:23

    Your ZIP file should include

    • All starter code files (.py and .ipynb) with your edits (in the top-level directory)

    Your PDF should include (in order):

    • Your full name
    • Collaboration statement
    • Problem 1: figures 1a and 1b
    • Problem 2: figures 2a, 2b, and 2c; short answer for 2d
    • Problem 3: figures 3a, 3b, and 3c (with captions)
    • Problem 4: table 4a and code listing 4b

    Please use the provided LaTeX template: https://github.com/tufts-ml-courses/cs152l3d-24f-assignments/blob/main/hw2/hw2_template.tex

    Questions?

    Jump to: Starter Code   Problem 1   Problem 2   Problem 3   Problem 4

    Updates to Instructions:

    Goals

    We spent Weeks 3 and 4 learning all about self- and semi-supervised learning.

    In this HW2, you'll implement a common method for each style of SSL (one self-supervised, one semi-supervised), and then evaluate your implementations on a toy dataset.

    Background

    To read up on Pseudo-labeling for Problem 2, see our fact sheet.

    To read up on SimCLR for Problem 3, see our fact sheet.

    Problem Setup

    Starter Code and Provided Data

    You can find the starter code and our provided "two half moons" dataset in the course Github repository here:

    https://github.com/tufts-ml-courses/cs152l3d-24f-assignments/tree/main/hw2/

    Datasets

    Run the top cells of hw2.ipynb, which load and plot the data we'll classify.

    Two Half Moons dataset

    Two Half Moons toy dataset. What can we learn from only 5 red and 5 blue points in training (and 5 more of each in validation)? Can more unlabeled data help?

    This is a variant of the common "two half moons" dataset that is widely used to illustrate binary classification tasks, especially in semi-supervised learning.
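
    For intuition only, a similar two-moons toy set can be generated with scikit-learn's make_moons; the sample counts, noise level, and split below are illustrative assumptions, not the course's actual provided files.

        from sklearn.datasets import make_moons

        # Generate a noisy two-moons dataset; use the course's provided data
        # files for the actual assignment.
        x_all, y_all = make_moons(n_samples=2068, noise=0.1, random_state=0)
        x_tr, y_tr = x_all[:10], y_all[:10]      # a handful of labeled training points
        x_va, y_va = x_all[10:20], y_all[10:20]  # a small labeled validation split
        x_unlab = x_all[20:]                     # the rest, treated as unlabeled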

    Classifier architecture

    Skim MLPClassifier.py, which defines the very simple core architecture we will use:

    • input for each example is a 2-dim array/tensor
    • hidden layer with 32 units followed by ReLU
    • hidden layer with 2 units followed by ReLU
    • hidden layer with 2 units followed by L2 normalization
    • output layer that produces a 2-dim array/tensor of logits

    We could get predicted probabilities for the two classes by applying softmax to the logits produced by the network.

    Note that the last hidden layer produces an "embedding" of the input feature vector on the unit circle in 2 dimensions. This is a special case of the common practice in representation learning of embedding on a unit hypersphere in a desired number of dimensions (often 500+).
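
    For concreteness, here is a minimal PyTorch sketch of that architecture. This is not the course's MLPClassifier.py (its class name, layer names, and details may differ); it only illustrates the layer sizes and the unit-circle embedding described above.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinyMLP(nn.Module):
            """Illustrative stand-in for the architecture sketched above."""
            def __init__(self):
                super().__init__()
                self.hidden1 = nn.Linear(2, 32)   # 2-dim input -> 32 units + ReLU
                self.hidden2 = nn.Linear(32, 2)   # -> 2 units + ReLU
                self.hidden3 = nn.Linear(2, 2)    # -> 2 units + L2 normalization
                self.output = nn.Linear(2, 2)     # -> 2-dim logits

            def forward(self, x_ND):
                h = F.relu(self.hidden1(x_ND))
                h = F.relu(self.hidden2(h))
                z = F.normalize(self.hidden3(h), p=2, dim=1)  # embedding on the unit circle
                return self.output(z)                         # logits; softmax gives probabilities

        # Example: predicted probabilities for a batch of 5 random 2-dim inputs.
        probs_NC = torch.softmax(TinyMLP()(torch.randn(5, 2)), dim=1)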

    Problem 1

    Worth 15 out of 100 points

    Tasks for Code Implementation

    Target time for these coding steps: 20 min.

    Skim train_super.py, which defines a function for performing training. This is very similar to what we used in HW1. There's nothing you need to implement here, but focus on understanding what this function does.

    Tasks for Experiment Execution

    Target time for all these steps: 30 min.

    Step through hw2.ipynb to achieve the following.

    EXPERIMENT 1(a): Explore settings of lr, n_epochs, and seed to find a setting that works well (low validation-set xent) on the bigger labeled-set version of half-moons (see hw2.ipynb). We have plenty of data here, so keep l2pen_mag = 0.0. This step is so we know that a solid solution to this dataset is achievable if we have enough data.

    EXPERIMENT 1(b): Explore settings of lr, n_epochs, and seed to find a setting that works well (low validation-set xent) on the smaller version of half-moons (see hw2.ipynb). Please set l2pen_mag = 2.0 to avoid overly confident predicted probabilities. This smaller version is what we'll try to improve throughout Problems 2 and 3 below.
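
    One simple way to organize these explorations is a small sweep over seed and learning rate, keeping whichever run reaches the lowest validation xent. The sketch below is hypothetical: the call signature and return values of train_super.train_model, the loader names, and the history key are assumptions to be adjusted to whatever hw2.ipynb actually provides.

        import itertools
        import train_super

        results = {}
        for seed, lr in itertools.product([101, 202, 303], [0.3, 0.1, 0.03]):
            # Assumed signature and return values; match them to the notebook.
            model, info = train_super.train_model(
                tr_loader, va_loader,
                lr=lr, n_epochs=100, seed=seed, l2pen_mag=0.0)
            results[(seed, lr)] = min(info['va_xent'])  # assumed key for the validation xent trace

        best_seed, best_lr = min(results, key=results.get)
        print('best:', best_seed, best_lr, results[(best_seed, best_lr)])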

    Tasks for Report

    Figure 1(a): Show the two-panel visualization (trace plot of losses, decision boundary) representing the best run from Experiment 1(a). No caption necessary here.

    Figure 1(b): Show the two-panel visualization (trace plot of losses, decision boundary) representing the best run from Experiment 1(b). No caption necessary here.

    Problem 2: Semi-supervised learning via Curriculum Pseudo-labeling

    Worth 30 out of 100 points

    Tasks for Code Implementation

    Target time for these steps: 30 min.

    Inside data_utils_pseudolabel.py, make the following edits.

    CODE 2(i): Edit make_pseudolabels_for_most_confident_fraction so that, given a trained model and an unlabeled dataset stored in tensor xu_ND (shape N x D), you compute phat_N and yhat_N, tensors holding the maximum predicted probability and the corresponding class label for each of the N=2048 examples in the unlabeled set.
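
    A minimal sketch of the intended computation is below, assuming model is the trained classifier from 1(b) and xu_ND is the unlabeled tensor; the actual starter function may organize this differently (e.g. it applies the quantile threshold per class).

        import torch

        with torch.no_grad():
            logits_NC = model(xu_ND)                     # N x 2 logits
            proba_NC = torch.softmax(logits_NC, dim=1)   # N x 2 predicted probabilities
            phat_N, yhat_N = torch.max(proba_NC, dim=1)  # confidence and pseudolabel per example

        # Keep only the most confident fraction, e.g. a 0.5 quantile threshold
        # keeps the top 50% most confident unlabeled examples.
        thresh = torch.quantile(phat_N, 0.5)
        keep_mask_N = phat_N >= thresh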

    Also read through make_expanded_data_loaders so you understand how we are merging the original train set with this new pseudolabel dataset to make new data loaders. But no edits necessary.

    Tasks for Experiment Execution

    Target time for these steps: 2 hrs.

    Step through hw2.ipynb to achieve the following.

    EXPERIMENT 2(a): For the best model from 1(b) (trained on the smaller dataset), use your code to make pictures of the pseudolabels obtained by thresholding at the 0.9, 0.5, and 0.1 quantiles of the predicted probabilities of each class.

    EXPERIMENT 2(b): Perform PHASE ONE of curriculum pseudo-labeling. Build a dataset using the model from 1(b), with the quantile threshold set to 0.5 (keeping the most confident 50% of all unlabeled data). Run train_super with suitable settings of seed, lr, and n_epochs so you see reasonable convergence and low validation-set xent. Set l2pen_mag=200 because even though we have lots of "data", we want to avoid overconfident predicted probabilities.

    EXPERIMENT 2(c): Perform PHASE TWO of curriculum pseudo-labeling. Build a dataset using the phase-one model from 2(b), with the quantile threshold set to 0.25 (keeping the most confident 75% of all unlabeled data). Run train_super with suitable settings of seed, lr, and n_epochs so you see reasonable convergence and low validation-set xent. Again, set l2pen_mag=200 because even though we have lots of "data", we want to avoid overconfident predicted probabilities.
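
    Putting the two phases together, the workflow looks roughly like the sketch below. The argument order and return values of the two helper functions and of train_super.train_model are assumptions based on their descriptions above; follow the notebook's actual calls.

        import train_super
        from data_utils_pseudolabel import (
            make_expanded_data_loaders, make_pseudolabels_for_most_confident_fraction)

        # PHASE ONE: pseudolabel the unlabeled pool with the model from 1(b),
        # keep the most confident 50%, retrain with heavy regularization.
        pl_one = make_pseudolabels_for_most_confident_fraction(model_1b, xu_ND, 0.5)
        tr_loader_one, va_loader_one = make_expanded_data_loaders(pl_one)
        model_one, _ = train_super.train_model(
            tr_loader_one, va_loader_one,
            lr=0.05, n_epochs=200, seed=101, l2pen_mag=200)

        # PHASE TWO: re-pseudolabel with the phase-one model, keep the top 75%.
        pl_two = make_pseudolabels_for_most_confident_fraction(model_one, xu_ND, 0.25)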

    Tasks for Report

    Figure 2(a): Show the dataset visualization figure resulting from Experiment 2(a). The purpose is to sanity check your pseudolabel dataset construction. No caption necessary.

    Figure 2(b): Show the two-panel visualization (trace plot of losses, decision boundary) from the best training run of Experiment 2(b). No caption necessary.

    Figure 2(c): Show the two-panel visualization (trace plot of losses, decision boundary) from the best training run of Experiment 2(c). No caption necessary.

    Short answer 2(d): Reflect on the similarities and differences between the decision boundaries seen here in Problem 2 (with pseudolabels) and the earlier boundary in Fig 1b (which only used the labeled set). Try to align what you see with conceptual knowledge of how pseudolabeling works. Hint: How do your results comparing supervised and pseudo-label semi-supervised learning contrast with Fig. 1 of the Ouali et al. paper we read?

    Problem 3: Self-supervised learning via SimCLR

    Worth 45 out of 100 points

    Tasks for Code Implementation

    Target time for these steps: 3 hr.

    In losses_simclr.py, complete the following tasks:

    • Task 3(i): Complete the todos in calc_self_loss_for_batch
    • Task 3(ii): Implement calc_simclr_loss__forloop
    • Task 3(iii): (optional, recommended for speed) Implement calc_simclr_loss__fast

    That last task is optional; it focuses on how to speed up Python/PyTorch code by using vectorized built-in functions and avoiding explicit for loops. You can skip this step, but note that your later experiments might be much slower, with training for 50 epochs taking several minutes rather than a few seconds.
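
    To anchor the math, below is a self-contained sketch of the standard NT-Xent (SimCLR) loss written with an explicit loop. It is not the starter code's calc_simclr_loss__forloop (the expected signature and pairing convention may differ); it assumes rows i and i+N of the embedding matrix are the two augmented views of example i.

        import torch
        import torch.nn.functional as F

        def simclr_loss_forloop(z_2NK, temperature=0.5):
            """NT-Xent loss for a 2N x K matrix of embeddings (two views per example)."""
            z = F.normalize(z_2NK, dim=1)             # project onto the unit hypersphere
            twoN = z.shape[0]
            N = twoN // 2
            sim = (z @ z.T) / temperature             # scaled cosine similarities
            per_example = []
            for i in range(twoN):
                j = (i + N) % twoN                    # index of i's positive partner
                mask = torch.ones(twoN, dtype=torch.bool)
                mask[i] = False                       # exclude self-similarity from the denominator
                log_denom = torch.logsumexp(sim[i][mask], dim=0)
                per_example.append(log_denom - sim[i, j])   # negative log-softmax of the positive pair
            return torch.stack(per_example).mean()

    A vectorized version computes the same quantity by building the full 2N x 2N similarity matrix once, masking the diagonal, and applying logsumexp row-wise; that is the kind of speedup calc_simclr_loss__fast is meant to capture.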

    Next, skim train_self.py to see how we perform training with a self-supervised objective. You don't need to edit anything here, but you should understand how it works.

    Tasks for Experiment Execution

    Target time for these steps: 2 hr.

    Step through the Problem 3 section of hw2.ipynb to achieve the following.

    EXPERIMENT 3(a): Using your SimCLR loss implementation and the provided train_self module, fit an MLP encoder to the unlabeled half-moons dataset (all U=2048 examples). You'll want to explore lr, n_epochs, and seed to find a setting that works well (low training-set loss; we're looking for values below 4.0 after convergence).

    EXPERIMENT 3(b): Visualize the learned representations for your best SimCLR model from 3(a), and compare these to the best supervised model from 1(b) (using the smaller data) and from 1(a) (bigger data). Use the provided code in hw2.ipynb.

    EXPERIMENT 3(c): Freeze the learned representations from 3(a) via a call to set_trainable_layers. Next, call train_super.train_model to train a linear output "classifier head" using the provided tr_loader and va_loader. Keep l2pen_mag=0.0, and find reasonable values of the other settings (lr, n_epochs, seed).
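
    Under the hood, "freezing" just means turning off gradients for everything except the output head. A generic PyTorch sketch of that pattern is shown below, assuming simclr_model is the encoder trained in 3(a) and that its final layer's parameter names start with "output" (both assumptions); for the assignment itself, use set_trainable_layers and train_super.train_model rather than a hand-rolled loop.

        import torch

        # Freeze every parameter except those in the output head.
        for name, param in simclr_model.named_parameters():
            param.requires_grad = name.startswith("output")

        # Only the still-trainable head parameters go to the optimizer.
        head_params = [p for p in simclr_model.parameters() if p.requires_grad]
        optimizer = torch.optim.SGD(head_params, lr=0.1)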

    Tasks for Report

    In your submitted report, include:

    Figure 3(a) with caption: Plot the loss over time for your best SimCLR runs. In the caption, describe your strategy for (1) selecting a seed and (2) selecting a learning rate.

    Figure 3(b) with caption: Show side-by-side the unit-circle embeddings for the test set from (1) your best SimCLR model from 3(a), (2) the best supervised model from 1(b), and (3) the best supervised model from 1(a), using code provided in hw2.ipynb. In the caption, please reflect on the differences between the panels. What about how each method is trained leads to the embeddings you see?

    Figure 3(c) with caption: Show the two-panel visualization (loss traces on left, decision boundary on right) for your fine-tuned SimCLR classifier from 3(c).

    Problem 4: Head-to-head comparison

    Worth 5 out of 100 points

    Tasks for Experiment Execution

    In hw2.ipynb, compute and report the test-set cross-entropy (base 2) and accuracy for each of the following (a sketch of one way to compute these metrics appears after this list):

    • best supervised model from 1(b)
    • best semi-supervised model from 2(c)
    • best self-supervised model plus fine-tuning from 3(c)
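
    One way to compute both metrics is sketched below; model and te_loader are placeholders for each trained model and the provided test loader. PyTorch's cross_entropy returns nats, so dividing by ln(2) converts to base 2.

        import math
        import torch
        import torch.nn.functional as F

        def eval_test_metrics(model, te_loader):
            """Return (test xent in bits per example, test accuracy)."""
            model.eval()
            xent_sum, n_correct, n_total = 0.0, 0, 0
            with torch.no_grad():
                for x_BD, y_B in te_loader:
                    logits_BC = model(x_BD)
                    xent_sum += F.cross_entropy(logits_BC, y_B, reduction="sum").item()
                    n_correct += (logits_BC.argmax(dim=1) == y_B).sum().item()
                    n_total += y_B.shape[0]
            xent_base2 = xent_sum / n_total / math.log(2)   # convert nats to bits
            return xent_base2, n_correct / n_total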

    Tasks for Report

    In your submitted report, include:

    Table 4(a): Report the final cross entropy and accuracy of each model on the test set. No caption is necessary.

    Code listing 4(b): Include the code for your best implementation of calc_simclr_loss (either the forloop or fast version). Use the provided style in hw2_template.tex.

    Credits

    Problem 3 on SimCLR adapted in part from

    • Homework 4 of CS 1678/2078 at Pitt: https://people.cs.pitt.edu/~kovashka/cs1678_sp24/hw4.html

    • Q4 of Homework 3 of CS 231n at Stanford: https://cs231n.github.io/assignments2024/assignment3/#q4-self-supervised-learning-for-image-classification

    • Philip Lippe's tutorial notebook on SimCLR: https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial17/SimCLR.html
