Conversation

@alex000kim (Contributor) commented Nov 15, 2025

Simplifies Llama 3.1 LoRA finetuning by generating platform-specific configs dynamically using tune cp instead of maintaining static config files.
Currently, using static configs breaks this example on certain platforms/GPUs.

Changes

  • Remove configs/ folder and static YAML files
  • Generate platform-specific configs at runtime with tune cp llama3_1/${MODEL_SIZE}_lora (see the sketch below)
  • Update lora.yaml to use uv for dependencies
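For reference, a minimal sketch of what the runtime config generation could look like. This assumes MODEL_SIZE is set by the job (e.g. 8B); the output filename and the lora_finetune_single_device recipe in the last line are illustrative, not necessarily the exact commands used in lora.yaml:

```bash
# Sketch only: exact filenames and recipe names in lora.yaml may differ.
MODEL_SIZE=${MODEL_SIZE:-8B}   # e.g. 8B or 70B (assumed values)

# Copy torchtune's built-in LoRA config for this model size to a local file,
# so the config always matches the torchtune version installed on this node.
tune cp llama3_1/${MODEL_SIZE}_lora ./llama3_1_${MODEL_SIZE}_lora.yaml

# Launch finetuning from the freshly generated config (illustrative recipe name).
tune run lora_finetune_single_device --config ./llama3_1_${MODEL_SIZE}_lora.yaml
```

Because the config is copied from the installed torchtune package at runtime, it stays in sync with whatever platform/GPU the job lands on, which is the failure mode the static YAML files were hitting.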
