# Maya3 Transcription Pipeline

Audio transcription pipeline for Indian languages using Google Gemini AI models.

## Features

- **R2 Cloud Storage Integration**: Downloads audio tar files from Cloudflare R2
- **Supabase Integration**: Fetches video language metadata from database
- **Audio Processing**: Handles segment splitting with configurable duration limits
- **Multi-Model Support**: Works with Gemini 3 Pro, Flash, and 2.x models
- **Structured Output**: Four transcription formats per segment
- **Full Parameter Control**: Fine-grained control over every stage of the pipeline

## Setup

```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Ensure .env file has required credentials:
# - R2_ENDPOINT_URL, R2_BUCKET, R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY
# - GEMINI_KEY
# - URL, SUPABASE_ADMIN (Supabase credentials)
```
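
Credential and environment loading lives in `src/backend/config.py`. The following is a minimal sketch of how that loading could work, assuming `python-dotenv` is used; the helper name and structure are illustrative, not the actual module contents:

```python
# Hypothetical sketch of credential loading; the real config.py may differ.
import os

from dotenv import load_dotenv  # assumes python-dotenv is available

load_dotenv()  # pull variables from .env into the process environment

REQUIRED_VARS = [
    "R2_ENDPOINT_URL", "R2_BUCKET", "R2_ACCESS_KEY_ID", "R2_SECRET_ACCESS_KEY",
    "GEMINI_KEY", "URL", "SUPABASE_ADMIN",
]

def load_credentials() -> dict[str, str]:
    """Collect required credentials, failing early with a clear error if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```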

## Quick Start

```bash
# Basic usage - process a video
python pipeline.py pF_BQpHaIdU --language Telugu

# Limit segments for testing
python pipeline.py pF_BQpHaIdU -n 5 --language Telugu

# Use different model
python pipeline.py pF_BQpHaIdU -m gemini-3-pro-preview -t high -n 3
```

## Pipeline Configuration

### Audio Processing Controls

| Parameter | Default | Description |
|-----------|---------|-------------|
| `max_segment_duration_sec` | 10.0 | Hard upper limit; longer segments are split into chunks |
| `min_segment_duration_sec` | 1.0 | Skip segments shorter than this |
| `chunk_overlap_sec` | 0.0 | Overlap between adjacent chunks when splitting (helps preserve word boundaries) |
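
To make the interaction of these three parameters concrete, here is a small self-contained sketch of how chunk boundaries could be computed when a segment exceeds the hard limit. The function name and exact splitting strategy are illustrative, not necessarily what `audio_processor.py` does:

```python
# Illustrative chunk-boundary computation; audio_processor.py may implement this differently.
def plan_chunks(
    duration_sec: float,
    max_segment_duration_sec: float = 10.0,
    chunk_overlap_sec: float = 0.0,
) -> list[tuple[float, float]]:
    """Return (start, end) times covering the segment, each chunk at most the hard limit."""
    if duration_sec <= max_segment_duration_sec:
        return [(0.0, duration_sec)]
    step = max_segment_duration_sec - chunk_overlap_sec  # how far each chunk advances
    chunks, start = [], 0.0
    while start < duration_sec:
        end = min(start + max_segment_duration_sec, duration_sec)
        chunks.append((start, end))
        if end >= duration_sec:
            break
        start += step
    return chunks

# A 23 s segment with 0.5 s overlap -> [(0.0, 10.0), (9.5, 19.5), (19.0, 23.0)]
print(plan_chunks(23.0, 10.0, 0.5))
```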

### Segment Selection Controls

| Parameter | Default | Description |
|-----------|---------|-------------|
| `max_segments` | None | Limit total segments processed (for testing) |
| `segment_start_index` | 0 | Start from this segment index |
| `skip_short_segments` | True | Skip segments shorter than `min_segment_duration_sec` |
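
The selection parameters compose in a straightforward order: start index, then short-segment skipping, then the total-count cap. A hedged sketch of that filtering (field and function names are illustrative, not the pipeline's internals):

```python
# Illustrative segment selection; the real pipeline's field names may differ.
def select_segments(
    segments: list[dict],               # each dict assumed to carry a "duration_sec" key
    max_segments: int | None = None,
    segment_start_index: int = 0,
    skip_short_segments: bool = True,
    min_segment_duration_sec: float = 1.0,
) -> list[dict]:
    """Apply the start index, short-segment skipping, and the total-count cap, in that order."""
    selected = segments[segment_start_index:]
    if skip_short_segments:
        selected = [s for s in selected if s["duration_sec"] >= min_segment_duration_sec]
    if max_segments is not None:
        selected = selected[:max_segments]
    return selected
```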

### Model Selection

| Model | Thinking Levels | Cost Tier |
|-------|----------------|-----------|
| `gemini-3-pro-preview` | low, high | Premium |
| `gemini-3-flash-preview` | minimal, low, medium, high | Standard |
| `gemini-2.5-pro` | N/A (budget-based) | Premium |
| `gemini-2.5-flash` | N/A (budget-based) | Standard |
| `gemini-2.5-flash-lite` | N/A (budget-based) | Lite |
| `gemini-2.0-flash` | N/A | Standard |
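
Because the Gemini 3 preview models take a discrete thinking level while the 2.5 series is budget-based, the pipeline has to translate its single `thinking_level`/budget knob into model-specific request options. A hypothetical sketch of that dispatch (the helper name, default values, and option keys are assumptions, not the actual implementation or SDK API):

```python
# Hypothetical mapping of the pipeline's thinking settings to per-model request options.
def thinking_options(model: str, thinking_level: str | None = None,
                     thinking_budget: int | None = None) -> dict:
    """Return model-appropriate reasoning options as a plain dict."""
    if model.startswith("gemini-3"):
        # Gemini 3 preview models accept a discrete level (see table above).
        return {"thinking_level": thinking_level or "low"}
    if model.startswith("gemini-2.5"):
        # 2.5 models take a numeric token budget instead of a level.
        return {"thinking_budget": thinking_budget if thinking_budget is not None else 1024}
    return {}  # e.g. gemini-2.0-flash: no thinking configuration
```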

## Output Format

Each segment produces four transcription formats:

1. **native_transcription**: Pure native script, no punctuation
2. **native_with_punctuation**: Native script with minimal punctuation
3. **code_switch**: Mixed script preserving language switching
4. **romanized**: Complete Roman/Latin script transliteration

Example output structure:
```json
{
  "video_id": "pF_BQpHaIdU",
  "model": "gemini-3-flash-preview",
  "thinking_level": "high",
  "language": "Telugu",
  "results": [
    {
      "segment_id": "segment_001.flac",
      "chunk_index": 0,
      "duration_sec": 8.5,
      "transcription": {
        "native_transcription": "...",
        "native_with_punctuation": "...",
        "code_switch": "...",
        "romanized": "..."
      },
      "model_used": "gemini-3-flash-preview",
      "processing_time_sec": 2.3
    }
  ]
}
```
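
The `transcription` object corresponds to the Pydantic schemas in `src/backend/transcription_schema.py`. A minimal sketch of what such a schema could look like, with field names taken from the output above (the class name is an assumption):

```python
# Sketch of a structured-output schema matching the four formats above.
from pydantic import BaseModel

class TranscriptionFormats(BaseModel):
    native_transcription: str       # pure native script, no punctuation
    native_with_punctuation: str    # native script with minimal punctuation
    code_switch: str                # mixed script preserving language switching
    romanized: str                  # full Roman/Latin transliteration
```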

## Python API

```python
from pipeline import run_pipeline, PipelineConfig, TranscriptionPipeline

# Simple usage
result = run_pipeline(
    video_id="pF_BQpHaIdU",
    language="Telugu",
    model="gemini-3-flash-preview",
    thinking_level="high",
    max_segments=5
)

# Full control with config
config = PipelineConfig(
    video_id="pF_BQpHaIdU",
    language="Telugu",
    max_segment_duration_sec=10.0,
    min_segment_duration_sec=1.0,
    max_segments=10,
    model="gemini-3-flash-preview",
    thinking_level="high",
    save_intermediate=True,
    batch_size=5
)

pipeline = TranscriptionPipeline(config)
result = pipeline.run()

# Direct access to backend modules
from src.backend import (
    download_video_segments,
    get_video_language,
    AudioProcessor,
    GeminiTranscriber,
    TranscriptionConfig
)
```

## Testing Multiple Models

```python
from pipeline import test_models

# Compare models on same video
results = test_models(
    video_id="pF_BQpHaIdU",
    language="Telugu",
    models=["gemini-3-flash-preview", "gemini-3-pro-preview"],
    thinking_levels=["high"],
    max_segments=3
)

for model, result in results.items():
    print(f"{model}: {result.total_processing_time_sec:.1f}s")
```

## Module Structure

```
maya3_transcribe/
├── pipeline.py              # Main transcription pipeline (entry point)
├── requirements.txt         # Core dependencies
├── README.md
├── .env                     # Credentials (R2, Gemini, Supabase)
├── src/
│   ├── backend/             # Transcription pipeline modules
│   │   ├── __init__.py
│   │   ├── config.py        # Configuration and env loading
│   │   ├── r2_storage.py    # R2 cloud storage client
│   │   ├── supabase_client.py # Supabase metadata client
│   │   ├── audio_processor.py # Audio segmentation/chunking
│   │   ├── transcription_schema.py # Pydantic schemas
│   │   └── gemini_transcriber.py # Gemini API integration
│   │
│   └── validators/          # Validation pipeline modules
│       ├── __init__.py
│       ├── base.py          # Base validator class
│       ├── runner.py        # Validator orchestrator
│       ├── indicwav2vec_validator.py  # IndicWav2Vec (AI4Bharat)
│       ├── indicmfa_validator.py      # Montreal FA + Indic models
│       ├── vistaar_validator.py       # Vistaar/Whisper (AI4Bharat)
│       ├── indic_conformer_validator.py # IndicConformer (AI4Bharat)
│       └── requirements.txt # Validator dependencies
│
├── transcriptions/          # Transcription output
├── validation_results/      # Validation output
└── docs/                    # Documentation
```

## Validators Module

The validators module provides multiple models for transcription validation:

| Validator | Model | Purpose |
|-----------|-------|---------|
| `indicwav2vec` | AI4Bharat IndicWav2Vec | ASR + CTC alignment |
| `indicmfa` | Montreal Forced Aligner | Precise word alignment |
| `vistaar` | AI4Bharat Vistaar (Whisper) | ASR with word timestamps |
| `indic_conformer` | AI4Bharat IndicConformer | 600M multilingual ASR |

### Validator Usage

```python
from src.validators import ValidatorRunner

# Initialize with specific validators enabled
runner = ValidatorRunner(
    enable_indicwav2vec=True,
    enable_vistaar=True,
    enable_indicmfa=False,  # Requires MFA installation
    enable_indic_conformer=True,
    language="te"  # Telugu
)

# Validate single segment
result = runner.validate(
    audio_path="segment.flac",
    reference_text="reference transcription",
    language="te"
)

# Access individual validator results
for name, vr in result.results.items():
    print(f"{name}: {vr.transcription}")
    print(f"  Confidence: {vr.overall_confidence}")
    print(f"  Words: {len(vr.word_alignments)}")

# Batch validation
results = runner.validate_batch(
    audio_paths=["seg1.flac", "seg2.flac"],
    reference_texts=["text1", "text2"]
)

# Save results
runner.save_results(results)
runner.cleanup()
```

### Install Validator Dependencies

```bash
# Core ML dependencies
pip install torch torchaudio transformers

# For Vistaar (Whisper-based)
pip install faster-whisper

# For IndicMFA (requires conda)
conda install -c conda-forge montreal-forced-aligner
pip install textgrid

# For IndicConformer (optional - NeMo)
pip install "nemo_toolkit[asr]"
```

## Progress Log

### 2026-02-03
- Initial pipeline implementation
- R2 download module
- Supabase language lookup
- Audio processing with 10s hard limit
- Gemini transcription with structured output
- Multi-model support (Gemini 3 Pro/Flash, 2.x series)
- Tested successfully with video pF_BQpHaIdU (Telugu)
- Both gemini-3-flash-preview and gemini-2.5-flash working correctly
- Refactored into src/backend package structure
- Pipeline orchestrator remains at project root