Reposted from: Github LLMs
Slamming: Training a Speech Language Model on One GPU in a Day
19 Feb 2025 · Gallil Maimon, Avishai Elmakies, Yossi Adi
We introduce Slam, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data, and tweaking of all other components. We empirically demonstrate that this training recipe also scales well with more compute, achieving results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform the predicted compute-optimal performance, giving an optimistic view of SLM feasibility. See code, data, models, and samples at https://pages.cs.huji.ac.il/adiyoss-lab/slamming.
Paper: https://arxiv.org/pdf/2502.15814v1.pdf
Code: https://github.com/slp-rl/slamkit
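For intuition, here is a minimal sketch of the core idea the abstract describes: initialise an SLM from a pretrained text LM and train it with next-token prediction over discrete speech units. The base checkpoint (Qwen/Qwen2.5-0.5B), the unit vocabulary size, and the dummy batch are illustrative assumptions rather than the authors' exact setup; the full recipe (data, speech tokeniser, preference optimisation) lives in the slamkit repo above.

```python
# Sketch only: SLM training as next-token prediction over discrete speech units,
# starting from a pretrained text LM. Checkpoint name and vocabulary size are
# placeholder assumptions; see https://github.com/slp-rl/slamkit for the real recipe.
import torch
from transformers import AutoModelForCausalLM

NUM_SPEECH_UNITS = 500  # placeholder: size of the discrete speech-unit vocabulary

# Initialise from a pretrained text LM rather than from scratch.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

# Swap the text vocabulary for the (much smaller) speech-unit vocabulary.
model.resize_token_embeddings(NUM_SPEECH_UNITS)

# Standard causal-LM objective on unit sequences (dummy batch of unit ids here;
# in practice these come from a speech tokeniser run over audio).
units = torch.randint(0, NUM_SPEECH_UNITS, (2, 256))
out = model(input_ids=units, labels=units)
out.loss.backward()
```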
https://t.me/deep_learning_proj