
I'm excited for dots.llm1 (142B total / 14B active)!


GitHub - rednote-hilab/dots.llm1

  • It seems like it'll be the best local model that can be run fast if you have a lot of RAM and a medium amount of VRAM.
  • It uses a shared expert (like DeepSeek and Llama 4), so it'll be even faster on partially offloaded setups.
  • There are a ton of options for fine-tuning, or for training from one of their many partially trained checkpoints.
  • I'm hoping for a good reasoning finetune. Hoping Nous does it.
  • It has a unique voice because very little synthetic data was used in its training.

llama.cpp support is in the works, and hopefully won't take too long since its architecture reuses components from models llama.cpp already supports.
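On the partial-offload point: with shared-expert MoE models, a common llama.cpp trick is to offload everything to the GPU and then override just the routed experts' tensors back to CPU, so attention and the shared expert (which run every token) stay on the GPU. This is only a sketch — the GGUF filename and tensor-name regex are assumptions, and it obviously can't work until dots.llm1 support actually lands in llama.cpp:

```shell
# Hypothetical invocation once dots.llm1 support merges into llama.cpp.
# -ngl 99 offloads all layers to the GPU; --override-tensor (-ot) then
# sends the routed experts' FFN weights back to CPU RAM, leaving the
# attention blocks and the shared expert on the GPU.
# The GGUF filename and the tensor regex below are assumptions.
./llama-server \
  -m dots.llm1.inst-Q4_K_M.gguf \
  -ngl 99 \
  --override-tensor "\.ffn_.*_exps\.=CPU" \
  -c 8192
```

The same pattern is what people already use for other big shared-expert MoEs on medium-VRAM boxes, which is why the shared expert matters so much for speed here.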

Are y'all as excited as I am? Also is there any other upcoming release that you're excited for?
