Llama3 just got ears

Alan Dao
Bach Vu
Rex Ha

We’re excited to share llama3-s v0.2, our latest multimodal checkpoint with improved speech understanding.

Demo

A real-time demo of Llama3-Speech (23rd Aug 2024 checkpoint): the MLLM listens to human speech and responds in text

Llama3-s v0.2 performs consistently across multiple speech-understanding benchmarks (see Results). While more analysis is needed, we’re excited to share this progress with the community and get feedback.

You can try it for yourself:

*Inference may be slow or queued due to shared compute

*For this round, please ask questions in English and keep them under 10 seconds. The model was trained on audio prompts of fewer than 500 tokens, a limitation we plan to address in a future update.

This post shares results and methodology behind an Aug 20th checkpoint. As always, this is just the beginning, and we need your ideas to push this research further.


💡 We invite you to join llama3-s: an ongoing, open-source, and open-data research experiment teaching llama3 to listen. See motivation.

Architecture

In a previous post, we shared llama3-s v0.1, an early-fusion experiment in which we instruct-tuned llama3 on encodec’s acoustic tokens [fig 1]. While we observed some transitivity between the LLM’s text tokens and the new audio tokens, there were clear limitations, such as a lack of generalization to non-synthetic voices, among other issues.

Fig 1: Our previous acoustic-token early-fusion experiment

Fig 2: The current approach, early fusion with semantic tokens

For llama3-s v0.2, we adapted llama3.1 using early fusion with semantic tokens, inspired by community feedback [fig 2]. Our goal is to leverage the benefits of semantic tokens, such as simplicity, better compression, and consistent speech-feature extraction, as demonstrated by WhisperVQ. We can always scale up to a hybrid approach and reintroduce acoustic features as needed, given more data and compute resources.
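
To make the early-fusion idea concrete, here is a minimal sketch of how a llama3.1 vocabulary could be extended with discrete semantic audio tokens using Hugging Face transformers. The `<|sound_XXXX|>` naming, the 512-entry codebook size, and the boundary markers are illustrative assumptions, not the exact llama3-s configuration:

```python
# Minimal sketch: extend llama3.1's vocabulary with semantic audio tokens.
# The "<|sound_XXXX|>" naming, the 512-entry codebook size, and the
# boundary markers are illustrative assumptions, not the exact llama3-s setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Meta-Llama-3.1-8B-Instruct"
CODEBOOK_SIZE = 512  # assumed number of WhisperVQ codes

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# One new token per semantic code, plus simple audio boundary markers.
audio_tokens = [f"<|sound_{i:04d}|>" for i in range(CODEBOOK_SIZE)]
tokenizer.add_tokens(audio_tokens + ["<|sound_start|>", "<|sound_end|>"])

# Grow the input embeddings and output head so the new IDs are trainable.
model.resize_token_embeddings(len(tokenizer))
```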

You can learn more about our comparison of semantic and acoustic tokens here.

Training

Stage 1: pre-training on real speech

Through rough ablation experiments, we found it useful to pre-train llama3.1 on continuous speech; this enhanced the model’s ability to generalize across semantic tokens.

Data: We used the MLS-10k dataset (10 hours of unlabeled, multilingual human speech courtesy of OpenSLR) to pre-train llama3.1 8B on next-token prediction (code here).
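
Conceptually, the pre-training corpus is just speech rendered as strings of discrete semantic tokens that llama3.1 learns to predict autoregressively. The sketch below shows that conversion step; `encode_semantic` is a hypothetical stand-in for the WhisperVQ encoding call, and the token format mirrors the assumption above:

```python
# Sketch: turn raw speech into a token string for next-token prediction.
# `encode_semantic` is a hypothetical stand-in for a WhisperVQ encoding call
# from WhisperSpeech; the real API and token format may differ.
import torchaudio

def audio_to_token_text(wav_path: str, encode_semantic) -> str:
    waveform, sample_rate = torchaudio.load(wav_path)
    codes = encode_semantic(waveform, sample_rate)  # -> list[int] of VQ codes
    body = "".join(f"<|sound_{c:04d}|>" for c in codes)
    return f"<|sound_start|>{body}<|sound_end|>"

# The resulting strings are packed to the 512-token max length and trained
# on with a plain next-token-prediction loss.
```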


Training: The pre-training totaled 5k steps and took over 30 hours*. We used Torchtune’s fully sharded data parallel (FSDP) training and the fused AdamW optimizer, along with the following parameters:

| Parameter | Continual Training |
| --- | --- |
| Epoch | 1 |
| Global batch size | 80 |
| Learning rate | 2e-4 |
| Learning scheduler | LambdaLR with warmup |
| Optimizer | AdamW Fused |
| Warmup steps | 20 |
| Weight decay | 0.01 |
| Gradient checkpointing | Full |
| Max length | 512 |
| Precision | bf16 |

The learning rate schedule is as follows, starting with a relatively high LR for sufficient warmup.

[Figure: learning rate schedule]
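
For reference, a warmup schedule of this shape can be expressed with PyTorch’s `LambdaLR`. The sketch below warms up linearly for 20 steps, matching the table; the linear decay afterwards is an assumption about the exact shape:

```python
# Sketch of a LambdaLR-with-warmup schedule: linear warmup for 20 steps,
# then a linear decay over the remaining steps. The decay shape after
# warmup is an assumption for illustration.
import torch

WARMUP_STEPS, TOTAL_STEPS, BASE_LR = 20, 5_000, 2e-4

model = torch.nn.Linear(8, 8)  # stand-in for the llama3.1 model
# The run used the fused AdamW variant; plain AdamW is shown here so the
# sketch also runs on CPU.
optimizer = torch.optim.AdamW(model.parameters(), lr=BASE_LR, weight_decay=0.01)

def lr_lambda(step: int) -> float:
    if step < WARMUP_STEPS:
        return (step + 1) / WARMUP_STEPS  # ramp up to the base LR
    return max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# Call optimizer.step() then scheduler.step() once per training step.
```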

Loss: After 5,000 steps, the loss converged just below 2, at which point we moved on to the next stage.

[Figure: pre-training loss curve, converging at ~1.99]

Compute: We trained on a single 10x RTX A6000 node that we own and operate ourselves (fondly named “Boba”). For a rough cost estimate, assuming a higher-end rate of USD 0.80 per GPU-hour, this stage comes to about $240 (10 GPUs x 30 hours).


MMLU Eval: We measured MMLU at this stage to get a sense of degradation. 0-shot MMLU dropped from 0.63 to 0.46, a roughly 27% relative decrease that we hoped to recover in the subsequent stage.

Stage 2: instruct tuning on a mixture of synthetic data

For the second stage of training, we instruct-tuned llama3 on interleaved synthetic data.

Data: We used a synthetically generated speech dataset, encoded into semantic tokens with WhisperVQ from WhisperSpeech. The dataset was then interleaved so that 70% of prompts are speech instructions and 30% are speech transcriptions.
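
A minimal sketch of the 70/30 interleaving, under the assumption that the two prompt types arrive as separate lists (the list names and contents are hypothetical; the real pipeline is in the linked dataset repo):

```python
# Sketch: interleave instruction and transcription prompts at a ~70/30 ratio.
# `instruction_samples` and `transcription_samples` are hypothetical lists
# standing in for the linked dataset's two prompt types.
import random

def interleave(instruction_samples, transcription_samples, ratio=0.7, seed=42):
    rng = random.Random(seed)
    inst, trans = list(instruction_samples), list(transcription_samples)
    mixed = []
    while inst or trans:
        if (rng.random() < ratio and inst) or not trans:
            mixed.append(inst.pop())   # speech instruction prompt
        else:
            mixed.append(trans.pop())  # speech transcription prompt
    return mixed
```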


Training: The instruct tuning was done with FSDP2 and mixed precision, with the final weights stored in bf16. We used the fused AdamW optimizer, a global batch size of 128 (mini-batches of 2-4), a 0.5e-4 learning rate, and a cosine learning-rate scheduler. You can find the full steps to reproduce our training here.

| Parameter | Continual Training |
| --- | --- |
| Epoch | 1 |
| Global batch size | 128 |
| Learning rate | 0.5e-4 |
| Learning scheduler | Cosine with warmup |
| Optimizer | AdamW Fused |
| Warmup steps | 73 |
| Weight decay | 0.005 |
| Gradient checkpointing | Full |
| Max length | 1024 |
| Precision | bf16 |
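
The global batch of 128 is reached by combining the small per-GPU mini-batches with gradient accumulation. A back-of-the-envelope sketch, assuming a mini-batch of 2 on each of the 8 GPUs:

```python
# Back-of-the-envelope: reaching a global batch of 128 on 8 GPUs with small
# per-GPU mini-batches via gradient accumulation (a mini-batch of 2 assumed).
NUM_GPUS = 8
MICRO_BATCH = 2      # per-GPU mini-batch (2-4 in the run)
GLOBAL_BATCH = 128

accum_steps = GLOBAL_BATCH // (NUM_GPUS * MICRO_BATCH)
print(accum_steps)   # 8 -> 8 GPUs x 2 samples x 8 accumulation steps = 128
```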

Compute: The training took 32 hours on 8x H100s, spanning 5 epochs at roughly 6 hours and 7,261 steps per epoch. At $2.20 per H100 per hour, we estimate this run cost about $563, not including several failed runs due to troubleshooting.

Model FLOPs Utilization (MFU) per step is around 20-25%, which leaves plenty of room for optimization. It’s also worth mentioning that we intentionally overtrained at this stage to run some grokking experiments.
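
As a rough sanity check on that MFU figure: MFU is achieved FLOPs divided by the hardware’s peak. The sketch below uses the common 6 x params x tokens approximation; the per-GPU throughput and the H100 peak are assumed values chosen only to illustrate the calculation:

```python
# Rough MFU estimate: achieved training FLOPs / hardware peak FLOPs.
# Per-GPU throughput and the H100 peak are assumed values for illustration.
PARAMS = 8e9                      # ~8B-parameter model
TOKENS_PER_SEC_PER_GPU = 4_500    # assumed training throughput
H100_PEAK_BF16_FLOPS = 989e12     # dense bf16 peak, no sparsity

achieved_flops = 6 * PARAMS * TOKENS_PER_SEC_PER_GPU  # fwd + bwd approximation
mfu = achieved_flops / H100_PEAK_BF16_FLOPS
print(f"MFU ≈ {mfu:.1%}")   # ~22% with these assumed numbers
```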

In total, both stages of training were achievable for under $600, with the entire experiment coming in under $2,800, accounting for various data pipelines and failed runs due to bugs and infrastructure interruptions.

Results

We found epoch 3 to be the most performant; it is our current demo checkpoint.


AudioBench Eval: AudioBench is a June 2024 benchmark designed to evaluate audio large language models (AudioLLMs). It measures speech-instruction capabilities as well as ASR and transcription, through a compilation of many open datasets.

| Model | Open-hermes Instruction Audio (GPT-4-O judge 0:5) | Alpaca Instruction Audio (GPT-4-O judge 0:5) | Librispeech clean v2 (ASR, WER) |
| --- | --- | --- | --- |
| Llama3.1-s-v2-epoch-1 | 3.02 | 2.87 | 94.66% |
| Llama3.1-s-v2-epoch-2 | 3.0 | 3.22 | 60.80% |
| Llama3.1-s-v2-epoch-3 | 3.45 | 3.53 | 49.98% |
| Llama3.1-s-v2-epoch-4 | 3.47 | 2.93 | 60.05% |
| Llama3.1-s-v2-epoch-5 | 3.34 | 3.01 | 69.07% |

Our training dataset did not contain Alpaca Instruction data. At epoch 3, llama3-s v0.2 achieved an average score of 3.53 on the ALPACA-Audio eval, which appears to beat SALMONN, Qwen-Audio, and WavLLM.

Fig 3: SOTA models evaluated on AudioBench

Overfitting started at epoch 4. Interestingly, the OpenHermes-Audio score remains high after this epoch, which likely indicates some training-data contamination, so we are inclined to disregard the OpenHermes-Audio criterion.

This checkpoint is weak at ASR, which was not our target, but we include the scores for completeness.
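
For context on the WER column above, word error rate can be computed with a library such as jiwer; this is shown only to make the metric concrete and is not necessarily AudioBench’s internal pipeline:

```python
# Sketch: computing word error rate (WER) for ASR outputs with jiwer.
# Shown only to make the metric concrete; AudioBench's pipeline may differ.
from jiwer import wer

reference  = "he began a confused complaint against the wizard"
hypothesis = "he began a confused complaint against the wizard of"

print(f"WER: {wer(reference, hypothesis):.2%}")  # one insertion over 8 words -> 12.50%
```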

MMLU eval: Base llama3.1 has an MMLU score of 0.6380, which degrades as follows across our epochs.

| Epoch | MMLU | Degradation (%) |
| --- | --- | --- |
| 1 | 0.5139 | 19.45 |
| 2 | 0.4621 | 27.57 |
| 3 | 0.4676 | 26.71 |
| 4 | 0.4720 | 26.02 |
| 5 | 0.4703 | 26.29 |
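
The degradation column is simply the relative drop from the base model’s 0.6380 score; a quick check with the values copied from the table:

```python
# Verify the degradation column: relative drop from base llama3.1's MMLU.
BASE_MMLU = 0.6380
epoch_scores = {1: 0.5139, 2: 0.4621, 3: 0.4676, 4: 0.4720, 5: 0.4703}

for epoch, score in epoch_scores.items():
    degradation = (1 - score / BASE_MMLU) * 100
    print(f"Epoch {epoch}: {degradation:.2f}%")   # e.g. Epoch 1: 19.45%
```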

Next Steps

Llama3-s v0.2 is still in early development and has limitations:

  • The model is sensitive to poorly compressed incoming audio
  • The model cannot handle audio longer than 10 seconds and gets confused by it
  • It is weak against nonsensical audio and will need to be trained on noise

Additionally, our current approach, a Type D.1 multimodal model, has well-studied limitations. Namely, the tokenizers are challenging to scale, and there is little fine-grained control over how modality information flows through the model. As a tradeoff for its architectural simplicity, this approach will likely require more training data down the road.

For now, our next steps are as follows:

  • Curating the training dataset better: longer prompts and filtering out imperfect speech data
  • Building a more efficient synthetic data pipeline that skips redundant layers
  • Establishing cascaded-system baseline benchmarks to evaluate computational and latency improvements
  • Exploring more efficient model architectures

Long term, we aim to develop an open, multi-turn speech model for llama3-s that excels in low-resource languages, with a focus on improving generalization across ASEAN's diverse accents and dialects. Achieving this will necessitate a significant and sustained data collection effort.

Acoustic vs. Semantic Tokens

💡 tl;dr: Acoustic tokens, though richer in audio features, require large amounts of training data and compute.

The loss for our acoustic-token pre-training was largely stuck around 4.

[Figure: acoustic-token pre-training loss curve, plateauing around 4]

In contrast, pre-training on semantic tokens converged to ~1.8 after 7k steps.

[Figure: semantic-token pre-training loss curve, converging at ~1.8]

Acknowledgements


Open Call

We’re calling on LLM researchers and audio experts to experiment with us.

Join the Discord fun:

We believe that collaborative, open research can accelerate progress in this exciting field. Whether you're an experienced researcher or an enthusiastic newcomer, your contribution could be valuable.

💡 At Homebrew Computer Company, we like smaller, “edge-friendly” models that are privacy-preserving and feasible to train on energy-efficient clusters. Read more about our AI philosophy here.



The Soul of a New Machine

To stay updated on all of Homebrew's research, subscribe to The Soul of a New Machine