
PaperCodex


Supervised Fine-tuning

360-LLaMA-Factory: Plug-and-Play Sequence Parallelism for Long-Context SFT and DPO Without Rewriting Your Workflow


Training large language models (LLMs) on long sequences—whether for document-level instruction tuning, multi-modal reasoning, or complex alignment tasks—has long been…

01/05/2026 · Direct Preference Optimization, Long-Context Training, Supervised Fine-tuning
LlamaFactory: Fine-Tune 100+ Language Models Effortlessly—No Coding Required


Fine-tuning large language models (LLMs) used to be a complex, time-consuming endeavor, requiring deep expertise in deep learning frameworks and custom code…

12/12/2025 · Multimodal Learning, Preference Alignment, Supervised Fine-tuning