2.2 LoRA Configuration

Implement fine-tuning using the HuggingFace `peft` and `trl` libraries. Configure LoRA with the following as starting points, then experiment:

- Rank (r): 8, 16, 32, or 64.
- Alpha: 16 or 32 (typically 2× the rank).
- Target modules: the attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) and optionally the MLP projections.
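A minimal sketch of such a configuration with `peft`, assuming a Llama-style causal LM (the rank, alpha, and dropout values are starting points, not tuned results):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Starting-point LoRA hyperparameters; experiment with r in {8, 16, 32, 64}
# and alpha at roughly 2x the rank.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    # Attention projections; MLP projections (e.g. gate/up/down) can be
    # added here as well for more capacity at higher memory cost.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap a base model with the LoRA adapters (model name is illustrative).
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```

The wrapped model can then be passed directly to `trl`'s `SFTTrainer` in place of the base model.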
