Qwen2.5 LLM Toolbox
$50
Benjamin Marie
The toolbox currently includes 16 Jupyter notebooks specially optimized for Qwen2.5, along with the logs of successful runs. More notebooks will be added regularly.
To run the code in the toolbox, CUDA 12.4 and PyTorch 2.4 are recommended. PyTorch 2.5 might also work, but I haven't tested it yet.
Toolbox content
- Supervised Fine-Tuning with Chat Templates (5 notebooks)
  - Full fine-tuning
  - LoRA fine-tuning
  - QLoRA fine-tuning with Bitsandbytes quantization
  - QLoRA fine-tuning with AutoRound quantization
  - LoRA and QLoRA fine-tuning with Unsloth
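For context on the "chat templates" part: the notebooks presumably rely on the tokenizer's built-in `apply_chat_template`, but the layout it produces for Qwen2.5 is the ChatML format, which is easy to see in a standalone sketch. The function name below is mine, for illustration only:

```python
def format_chatml(messages):
    """Render a list of {"role", "content"} dicts into ChatML,
    the template family Qwen2.5 uses: each turn is wrapped in
    <|im_start|> / <|im_end|> markers."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Generation prompt: open an assistant turn for the model to complete.
    out.append("<|im_start|>assistant\n")
    return "".join(out)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

During supervised fine-tuning, the training examples are serialized into this same layout so that the model learns where turns begin and end.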
- Preference Optimization (3 notebooks)
  - Full DPO training (TRL and Transformers)
  - DPO training with LoRA (TRL and Transformers)
  - ORPO training with LoRA (TRL and Transformers)
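The DPO notebooks use TRL's trainer, which computes the objective for you. As a conceptual aid only, here is the DPO loss for a single preference pair in plain Python (the `beta` value is illustrative, not taken from the notebooks):

```python
import math

def dpo_loss(pi_chosen, ref_chosen, pi_rejected, ref_rejected, beta=0.1):
    """DPO loss for one preference pair. Inputs are the summed log-probs
    of the chosen/rejected responses under the policy (pi_*) and under
    the frozen reference model (ref_*)."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log sigmoid

# When the policy equals the reference, the margin is 0,
# so the loss is -log(0.5) ≈ 0.693 for any pair.
print(dpo_loss(-12.0, -12.0, -15.0, -15.0))
```

Training pushes the policy's margin between chosen and rejected responses above the reference model's, which drives the loss below that starting point.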
- Quantization (3 notebooks)
  - AWQ
  - AutoRound (with code to quantize Qwen 2.5 72B)
  - GGUF for llama.cpp
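AWQ, AutoRound, and the GGUF k-quants each add their own refinements (activation-aware scaling, learned rounding, block structures), but they share one core idea: store low-bit integers plus a per-group float scale. A toy round-to-nearest 4-bit sketch of that core, not any of those actual schemes:

```python
def quantize_int4(weights):
    """Toy symmetric round-to-nearest 4-bit quantization of one weight
    group: integers in [-8, 7] plus a single float scale per group."""
    scale = max(abs(w) for w in weights) / 7.0  # map the largest weight to 7
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the stored integers."""
    return [v * scale for v in q]

w = [0.12, -0.05, 0.7, -0.33]
q, s = quantize_int4(w)
w_hat = dequantize(q, s)
```

With round-to-nearest, the reconstruction error per weight is bounded by half the scale, which is why smaller quantization groups (more scales) give better accuracy at a storage cost.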
- Inference with Qwen2.5 Instruct and Your Own Fine-tuned Qwen2.5 (4 notebooks)
  - Transformers with and without a LoRA adapter
  - vLLM offline and online inference
  - Ollama (not released yet)
  - llama.cpp
- Merging (3 notebooks)
  - Merge a LoRA adapter into the base model
  - Merge a QLoRA adapter into the base model
  - Merge several Qwen2.5 models into one with mergekit (not released yet)
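Merging a LoRA adapter into the base model is, mathematically, a single update per adapted matrix: W' = W + (alpha / r) * B @ A. In practice the notebooks presumably do this through a library call (PEFT provides `merge_and_unload()` for it), but the operation itself fits in a few lines. A toy sketch with plain Python lists, tiny matrices of my own choosing:

```python
def merge_lora(W, A, B, alpha, r):
    """Merge a LoRA adapter into a base weight matrix:
    W' = W + (alpha / r) * B @ A, with A of shape (r x in)
    and B of shape (out x r)."""
    scale = alpha / r
    out_dim, in_dim = len(B), len(A[0])
    return [
        [W[i][j] + scale * sum(B[i][k] * A[k][j] for k in range(r))
         for j in range(in_dim)]
        for i in range(out_dim)
    ]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
A = [[1.0, 2.0]]               # rank r=1, input dim 2
B = [[1.0], [3.0]]             # output dim 2, rank r=1
W_merged = merge_lora(W, A, B, alpha=1.0, r=1)
# W + B @ A = [[2.0, 2.0], [3.0, 7.0]]
```

After merging, the adapter is folded into the weights, so inference runs at the base model's speed with no extra matmuls.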
You can find all the toolboxes and more content created by The Kaitchup here:
https://newsletter.kaitchup.com/p/ai-toolboxes
Note: If you are a subscriber to The Kaitchup Pro, you already have access to the repository and all the other toolboxes. If you have just subscribed, you will receive an access token within a few hours. Contact The Kaitchup (https://newsletter.kaitchup.com/) if you don't receive it within 24 hours of subscribing.
30-day refund guarantee
All the notebooks and access to the repository