Liquid AI, an AI startup spun out of MIT, has introduced its first series of generative AI models, which it calls Liquid Foundation Models (LFMs).
“Our mission is to create best-in-class, intelligent, and efficient systems at every scale – systems designed to process large amounts of sequential multimodal data, to enable advanced reasoning, and to achieve reliable decision-making,” Liquid explained in a post.
According to Liquid, LFMs are “large neural networks built with computational units deeply rooted in the theory of dynamical systems, signal processing, and numerical linear algebra.” By comparison, LLMs are based on a transformer architecture, and by not using that architecture, LFMs are able to have a much smaller memory footprint than LLMs.
“This is particularly true for long inputs, where the KV cache in transformer-based LLMs grows linearly with sequence length. By efficiently compressing inputs, LFMs can process longer sequences on the same hardware,” Liquid wrote.
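The linear growth Liquid describes can be illustrated with a back-of-the-envelope calculation. The model dimensions below (layer count, head count, head size, fp16 precision) are illustrative assumptions and do not describe any particular model:

```python
# Rough KV-cache size for a transformer-based LLM: the cache stores one
# key vector and one value vector per layer, per head, per token, so
# memory grows linearly with sequence length.
def kv_cache_bytes(seq_len, n_layers=32, n_heads=32, head_dim=128,
                   bytes_per_elem=2):  # 2 bytes per element for fp16
    # factor of 2 accounts for storing both keys and values
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_elem

for seq_len in (4_096, 32_768):
    gib = kv_cache_bytes(seq_len) / 2**30
    print(f"{seq_len:>6} tokens -> {gib:.1f} GiB KV cache")
    # 4096 tokens -> 2.0 GiB; 32768 tokens -> 16.0 GiB
```

Under these assumptions, an 8x longer context costs 8x the cache memory, which is the pressure on long-context hardware budgets that Liquid says its architecture avoids.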
Liquid’s models are general-purpose and can be used to model any type of sequential data, such as video, audio, text, time series, and signals.
According to the company, LFMs are good at general and expert knowledge, mathematics and logical reasoning, and efficient and effective long-context tasks.
The areas where they currently fall short include zero-shot code tasks, precise numerical calculations, time-sensitive information, human preference optimization techniques, and “counting the r’s in the word ‘strawberry,’” the company said.
Currently, their main language is English, but they also have secondary multilingual capabilities in Spanish, French, German, Chinese, Arabic, Japanese, and Korean.
The first series of LFMs includes three models:
- A 1.3B model designed for resource-constrained environments
- A 3.1B model ideal for edge deployments
- A 40.3B Mixture of Experts (MoE) model optimal for more complex tasks
Liquid says it will be taking an open-science approach with its research, and will openly publish its findings and methods to help advance the AI field, but will not be open-sourcing the models themselves.
“This allows us to continue building on our progress and maintain our edge in the competitive AI landscape,” Liquid wrote.
According to Liquid, it is working to optimize its models for NVIDIA, AMD, Qualcomm, Cerebras, and Apple hardware.
Users can try out the LFMs now on Liquid Playground, Lambda (Chat UI and API), and Perplexity Labs. The company is also working to make them available on Cerebras Inference.