EDGE AI Working Group Wiki

Working Group information for the edge AI community


Liquid.AI MoE

Updated Feb 25, 2026 · Working group: Generative Edge AI

LFM2-24B-A2B is a 24B-parameter Mixture-of-Experts (MoE) model with 2.3B parameters active per token, built on Liquid AI's hybrid, hardware-aware LFM2 architecture.

By activating only the most relevant experts for each token at runtime, LFM2-24B-A2B delivers large-model capability with fast, memory-efficient inference: the full model fits in a roughly 32 GB memory footprint, while per-token compute is comparable to a ~2B-parameter dense model.
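To make the routing idea concrete, below is a minimal sketch of top-k expert routing in PyTorch. The layer sizes, expert count, and value of k are illustrative assumptions, not the actual LFM2-24B-A2B configuration.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only;
# dimensions, expert count, and k are assumptions, not LFM2-24B-A2B's config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)   # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                             # x: (n_tokens, d_model)
        scores = self.router(x)                       # (n_tokens, n_experts)
        top_w, top_i = scores.topk(self.k, dim=-1)    # keep only k experts per token
        top_w = F.softmax(top_w, dim=-1)              # normalize the kept weights
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (top_i == e)                       # which tokens routed to expert e
            token_idx, slot = mask.nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue                              # unselected expert: no compute spent
            out[token_idx] += top_w[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out

moe = TopKMoE()
tokens = torch.randn(16, 512)
print(moe(tokens).shape)  # torch.Size([16, 512]); only 2 of 8 experts ran per token
```

This also shows the tradeoff behind the figures above: all experts' weights must be resident in memory even though only k run per token, so the footprint scales with total parameters (the ~32 GB) while per-token compute scales with the ~2B active.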

GGUF weights: https://huggingface.co/LiquidAI/LFM2-24B-A2B-GGUF
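As a minimal sketch, a GGUF build from that repository can be fetched and run locally with huggingface_hub and llama-cpp-python. The quantization filename below is a hypothetical placeholder; check the repository's file list for the actual names.

```python
# Sketch: download one GGUF file from the repo and run it with llama-cpp-python.
# The filename is a placeholder; check the repo's "Files" tab for real names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="LiquidAI/LFM2-24B-A2B-GGUF",
    filename="LFM2-24B-A2B-Q4_K_M.gguf",  # hypothetical quant filename
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is a Mixture-of-Experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

Pick a quantization level whose file size fits your available RAM; lower-bit quants trade some quality for a smaller footprint.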

Check Ollama for new model listings: https://ollama.com/search?o=newest

Related Articles

• Gen Edge AI Forum 5 Agenda, April 21-22, 2026
• PrismML 1-bit Bonsai LM
• From Feasibility to Ecosystems: How Generative AI at the Edge Has Evolved
• Benchmarking Small Language Models on an Industry-grade, High-end Microcontroller

Contributors

• Danilo Pau