# Lab 2 — ResNet-18 on CIFAR-10 (HPML Spring 2026)

## Setup
```bash
source venv/bin/activate
```

## Run All Experiments (one command line)
```bash
source venv/bin/activate
python lab2.py --cuda --task all && python lab2_torchscript.py --cuda
```

## Individual Exercises

### Part A

#### C1–C2: Training with timing (5 epochs, GPU, default SGD)
```bash
python lab2.py --cuda --task train
```
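
The timing in `lab2.py` has to account for CUDA's asynchronous execution: without a synchronize before and after the measured region, pending kernels leak across the boundary and the wall-clock numbers are wrong. A minimal sketch of that pattern (the helper name `timed` is illustrative, not from the lab code):

```python
import time
import torch

def timed(fn, device):
    """Run fn() and return (result, elapsed seconds), synchronizing the
    GPU before and after so queued kernels don't skew the measurement."""
    if device.type == "cuda":
        torch.cuda.synchronize(device)
    start = time.perf_counter()
    result = fn()
    if device.type == "cuda":
        torch.cuda.synchronize(device)
    return result, time.perf_counter() - start

# Illustrative use: time one chunk of GPU (or CPU-fallback) work.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
work = lambda: torch.randn(256, 256, device=device) @ torch.randn(256, 256, device=device)
_, seconds = timed(work, device)
```

In the lab this wraps each of the 5 training epochs, so data-loading and compute time can be reported per epoch.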

#### C3: I/O worker sweep (0, 4, 8, 12, 16, 20)
```bash
python lab2.py --cuda --task c3
```
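
The sweep re-times data loading for each `num_workers` setting and keeps the fastest. A self-contained sketch of that loop, using a tiny random stand-in dataset rather than CIFAR-10 (all names here are illustrative):

```python
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

def time_loader(dataset, num_workers, batch_size=128):
    """Iterate the full dataset once and return elapsed seconds."""
    loader = DataLoader(dataset, batch_size=batch_size,
                        shuffle=True, num_workers=num_workers)
    start = time.perf_counter()
    for _batch in loader:
        pass
    return time.perf_counter() - start

# Tiny stand-in for CIFAR-10, so the sketch runs anywhere.
data = TensorDataset(torch.randn(512, 3, 32, 32),
                     torch.randint(0, 10, (512,)))

# The lab sweeps (0, 4, 8, 12, 16, 20); two values keep this demo fast.
timings = {w: time_loader(data, w) for w in (0, 2)}
best = min(timings, key=timings.get)
```

With the real dataset, the optimum is where worker spawn/IPC overhead stops paying for itself; that `best` value is what C4 and C5 reuse.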

#### C4: CPU vs GPU (use optimal workers from C3)
```bash
python lab2.py --cuda --task c4
```
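
The CPU-vs-GPU comparison runs the same training loop twice, differing only in which device the model and batches live on. A sketch of the device-resolution logic behind the `--cuda` flag (function name is illustrative):

```python
import torch

def pick_device(use_cuda: bool) -> torch.device:
    """Resolve a --cuda-style flag, falling back to CPU when CUDA is absent."""
    return torch.device("cuda" if use_cuda and torch.cuda.is_available() else "cpu")

device = pick_device(use_cuda=False)  # run once with False, once with True
# The training loop then does: model.to(device), images.to(device), ...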

#### C5: Optimizer comparison (GPU, optimal workers)
```bash
python lab2.py --cuda --task c5
```
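
`lab2.py` defines which optimizers are compared; a plausible line-up for this kind of lab is sketched below. The names, learning rate, and weight decay here are assumptions, not read from the lab code:

```python
import torch
import torch.nn as nn

def make_optimizer(name, params, lr=0.1):
    """Map a CLI-style optimizer name to a torch.optim instance.
    Hypothetical line-up; the real one lives in lab2.py."""
    table = {
        "sgd":          lambda p: torch.optim.SGD(p, lr=lr, momentum=0.9, weight_decay=5e-4),
        "sgd_nesterov": lambda p: torch.optim.SGD(p, lr=lr, momentum=0.9, nesterov=True, weight_decay=5e-4),
        "adagrad":      lambda p: torch.optim.Adagrad(p, lr=lr, weight_decay=5e-4),
        "adadelta":     lambda p: torch.optim.Adadelta(p, lr=lr, weight_decay=5e-4),
        "adam":         lambda p: torch.optim.Adam(p, lr=lr, weight_decay=5e-4),
    }
    return table[name](params)

# One illustrative step with a stand-in model.
model = nn.Linear(8, 2)
opt = make_optimizer("adam", model.parameters())
loss = model(torch.randn(4, 8)).sum()
loss.backward()
opt.step()
```

The comparison then trains the same ResNet-18 once per optimizer and reports per-epoch time and accuracy for each.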

#### C6: Without batch normalization
```bash
python lab2.py --cuda --task c6
```
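
One way to build the no-BN variant is to swap every `BatchNorm2d` for `Identity` after construction; the lab's actual approach may differ (e.g. passing a custom `norm_layer` when building the model). A sketch of the swap on a stand-in block:

```python
import torch
import torch.nn as nn

def strip_batchnorm(module: nn.Module) -> nn.Module:
    """Recursively replace every BatchNorm2d with a no-op Identity."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.Identity())
        else:
            strip_batchnorm(child)
    return module

# Small stand-in for a ResNet conv-BN-ReLU block.
block = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(inplace=True),
)
strip_batchnorm(block)
```

The forward pass is unchanged shape-wise, so the rest of the training code runs as-is; only the normalization is gone.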

### Part B: TorchScript (C7–C10)
```bash
python lab2_torchscript.py --cuda
```
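
The core of the TorchScript exercises, presumably, is scripting the trained model, saving it, and checking the reload matches eager execution; the saved file is also what the C++ extra credit loads. A minimal sketch with a stand-in model (the real script saves `resnet18_scripted.pt`):

```python
import os
import tempfile
import torch
import torch.nn as nn

# Stand-in model; the lab scripts the trained ResNet-18 instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 30 * 30, 10),
).eval()

example = torch.randn(1, 3, 32, 32)
scripted = torch.jit.script(model)              # or torch.jit.trace(model, example)

path = os.path.join(tempfile.mkdtemp(), "demo_scripted.pt")
scripted.save(path)                             # same mechanism as resnet18_scripted.pt
reloaded = torch.jit.load(path)

with torch.no_grad():
    same = torch.allclose(model(example), reloaded(example))
```

`torch.jit.load` needs no Python class definition, which is exactly why the serialized model is portable to LibTorch in C++.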

### Extra Credit: C++ LibTorch Inference
```bash
# 1. Download LibTorch (matching PyTorch version)
cd extra_credit
wget https://download.pytorch.org/libtorch/cu124/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcu124.zip
unzip libtorch-*.zip

# 2. Build
mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=$(pwd)/../libtorch ..   # CMAKE_PREFIX_PATH should be absolute
make

# 3. Run (uses the scripted model saved by lab2_torchscript.py)
./inference ../../resnet18_scripted.pt          # CPU
./inference ../../resnet18_scripted.pt --gpu    # GPU
```
