We’re excited to introduce LFM2, the second generation of Liquid Foundation Models—small, fast, and open-source models built for edge deployment and real-world performance.
Our 350M, 700M, and 1.2B models outperform larger peers in speed, memory efficiency, and instruction-following quality while running entirely on local hardware. LFM2 is built from first principles using Liquid Time-Constant Networks and is optimized for CPUs, NPUs, and embedded systems. You can explore LFM2 on the Liquid Playground.
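As a rough sketch of what on-device use could look like, assuming the LFM2 checkpoints are published on the Hugging Face Hub under an identifier such as LiquidAI/LFM2-1.2B (a placeholder name here; check the official model cards) and load through the standard transformers API:

```python
# Minimal sketch of running an LFM2 checkpoint locally on CPU.
# The model identifier below is an assumption, not a confirmed name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed Hugging Face Hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU by default

prompt = "Summarize the benefits of running language models on-device."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic and cheap on CPU.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can also be exported to edge runtimes for NPU and embedded targets; the snippet above is only meant to show how little is needed to try the models on a laptop-class CPU.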
Our full library of first-generation LFMs (3B, 7B, and 40B) remains available as well, giving you a wide range of models to work with depending on your goals, whether you're testing performance, experimenting with architectures, or building custom edge deployments.