● Done. Pushed to https://github.com/Maya-AI-ML-Trainings-Cookbooks/maya-tts-llama32-1b-pretraining (private).
  - 143 files, clean single-commit history
  - .env included (with HF token, wandb key, R2 credentials)
  - All data files, checkpoints, outputs, wandb logs, llamafactory_ref excluded
  - Repo is private under the org

● Done. Private repo created and pushed: https://github.com/Maya-AI-ML-Trainings-Cookbooks/qwen3-asr-1.7b-phase2-sft
  - 51 files, fresh history (no old commits)
  - Includes .env with R2/wandb credentials
  - Excludes: checkpoints, model weights, wandb logs, parquets, tar files, __pycache__
  - Private, under the Maya-AI-ML-Trainings-Cookbooks org

● Done. Repo created and pushed: https://github.com/Maya-AI-ML-Trainings-Cookbooks/maya-asr-hybrid-fastconformer-rnnt-stage1
  - Private repo under the Maya-AI-ML-Trainings-Cookbooks org
  - 80 files pushed (all code, configs, scripts, tests, .env)
  - Excluded: data/, artifacts/, experiments/, pretrained/, models/, tokenizers/ (200+ GB of large files)
  - Clean single commit, no prior history

● Done. Pushed to https://github.com/Maya-AI-ML-Trainings-Cookbooks/gemma3n-E2B-asr-ft (private). 534 files, 64,740 lines. Includes:
  - All scripts, tools, configs, docs, benchmarks, artifacts (JSON/stats only)
  - Full LlamaFactory vendor with all patches (tar loader, FA2, buffer repair, collator)
  - No data files (parquets, tars, model weights, checkpoints, logs, venv excluded)
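The "clean single-commit history" pattern used in each push above can be sketched as follows. This is a minimal illustration, not the exact commands run: the directory names in the .gitignore and the commented `gh repo create` invocation (which requires the GitHub CLI) are assumptions standing in for the real repo contents.

```shell
#!/bin/sh
set -e

# Work in a scratch directory so the sketch is self-contained.
tmp=$(mktemp -d)
cd "$tmp"

# Re-initializing gives a fresh .git, so no prior history is pushed.
git init -q .

# Stand-in project file (the real repos had 51-534 files).
echo "print('train')" > train.py

# Keep large artifacts out of the push via .gitignore
# (directory names here are illustrative).
cat > .gitignore <<'EOF'
data/
checkpoints/
wandb/
__pycache__/
EOF

git add -A
git -c user.name=ci -c user.email=ci@example.com \
    commit -q -m "Initial commit"

# Verify the history really is a single commit before pushing.
git rev-list --count HEAD

# Hypothetical create-and-push to a private org repo (GitHub CLI):
# gh repo create Maya-AI-ML-Trainings-Cookbooks/example-repo \
#   --private --source=. --push
```

The key point is that `git init` in a copy of the working tree (rather than pushing the existing repo) is what drops the old commits and keeps only the current snapshot.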