Liquid AI MoE

LFM2-24B-A2B is a 24B-parameter Mixture-of-Experts (MoE) model with 2.3B parameters active per token, built on Liquid AI's hybrid, hardware-aware LFM2 architecture.

By activating only the most relevant experts for each token at runtime, LFM2-24B-A2B aims to deliver large-model capability with fast, memory-efficient inference: roughly a 32GB total footprint with only ~2B parameters active per token.
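
For intuition, here is a minimal sketch of the top-k expert routing that MoE models use, in plain NumPy. The expert count, k, and dimensions below are illustrative placeholders, not LFM2-24B-A2B's actual configuration.

```python
# Minimal top-k MoE routing sketch: a learned router scores all
# experts per token, but only the top-k experts actually run.
# All sizes here are illustrative, not the model's real config.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64     # hidden size (illustrative)
N_EXPERTS = 32   # total experts (illustrative)
TOP_K = 4        # experts activated per token (illustrative)

# Random weights stand in for trained router/expert parameters.
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02
expert_w = rng.standard_normal((N_EXPERTS, D_MODEL, D_MODEL)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts only."""
    logits = x @ router_w              # score every expert
    top = np.argsort(logits)[-TOP_K:]  # keep the k highest-scoring
    # Softmax-renormalize gate weights over the selected experts.
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()
    # Only k of N_EXPERTS weight matrices are touched per token,
    # which is why active parameters stay small while total grows.
    return sum(g * (x @ expert_w[e]) for g, e in zip(gates, top))

token = rng.standard_normal(D_MODEL)
print(moe_forward(token).shape)  # (64,)
```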

https://huggingface.co/LiquidAI/LFM2-24B-A2B-GGUF
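
A hedged sketch of running the GGUF weights locally with llama-cpp-python (`Llama.from_pretrained` is its real download-and-load helper). The quant filename pattern is an assumption; pick an actual file from the repo's file list to match your memory budget.

```python
# Sketch: download a quantized GGUF from the Hugging Face repo and
# run a completion. The filename glob is an assumption -- check the
# repo for the quant you actually want (e.g. Q4_K_M vs Q8_0).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-24B-A2B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant name; verify in the repo
    n_ctx=4096,               # context window; adjust to taste
)

out = llm("Q: What is a Mixture-of-Experts model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```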

Check Ollama for availability: https://ollama.com/search?o=newest
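
If the model shows up on Ollama, the official `ollama` Python client could drive it; the model tag below is an assumption (Ollama can also pull GGUF repos straight from Hugging Face), so substitute whatever tag the registry actually lists.

```python
# Sketch, assuming the model is available to your local Ollama
# server. The tag is an assumption; you may need to pull it first
# (e.g. `ollama pull <tag>`) before chat() will succeed.
import ollama

resp = ollama.chat(
    model="hf.co/LiquidAI/LFM2-24B-A2B-GGUF",  # assumed tag
    messages=[{"role": "user", "content": "Summarize MoE in one line."}],
)
print(resp["message"]["content"])
```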