Bobbie-model (May 2026)

The research collective has hinted at a 13B version with Mixture of Depths (MoD) later this year. Until then, Bobbie-7B deserves a spot in your evaluation pipeline.

4. Performance Benchmarks

We ran Bobbie-7B-Instruct against Llama-3-8B-Instruct and Mistral-7B-v0.3 on an RTX 4090. Notably, the team explicitly filtered out any training data containing eval benchmark examples (MMLU, GSM8K, HumanEval) using 13-gram overlap detection, so Bobbie's benchmark scores are unlikely to be contaminated.
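The filtering code itself isn't published; as a minimal sketch, 13-gram overlap detection against a benchmark corpus could look like the following (whitespace tokenization and all function names are assumptions, not the team's actual pipeline):

```python
def ngrams(tokens, n=13):
    """All n-gram tuples of a token list (empty if the list is shorter than n)."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_benchmark_index(benchmark_texts, n=13):
    """Union of n-grams over every benchmark example (MMLU, GSM8K, HumanEval, ...)."""
    index = set()
    for text in benchmark_texts:
        index |= ngrams(text.split(), n)
    return index

def is_contaminated(document, index, n=13):
    """Drop a training document if any of its n-grams appears in a benchmark example."""
    return not ngrams(document.split(), n).isdisjoint(index)
```

In practice, pipelines usually normalize case and punctuation before hashing the n-grams, but the core idea is this set intersection.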

If you’ve been following the open-source LLM space, you’ve likely memorized the specs of Llama 3, Mixtral, and Qwen. But a new contender has been quietly gaining traction in the "small model" category: Bobbie-7B.

messages = [ "role": "user", "content": "Summarize this 20k token document..." ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) output = model.generate(inputs, max_new_tokens=512, temperature=0.7) print(tokenizer.decode(output[0][inputs.shape[1]:])) Bobbie works out-of-the-box with vLLM 0.6.0+: This means Bobbie's benchmarks are likely not contaminated

