---
title: Fluid Inference
emoji: 💻
colorFrom: indigo
colorTo: purple
sdk: static
pinned: false
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/6722a3d5150ed6c830d8f0cd/8Npu9X2Ilmo7-0A5Bg204.png
---
We're a team that set out to build local-first consumer AI apps, but after six months and thousands of users, we realized the hardware and software aren't there yet. Near-realtime workloads on consumer CPUs and GPUs are often too slow and drain battery life on most consumer hardware.
While some solutions exist for running local AI models on AI accelerators, most are closed source or only partially open, which we found frustrating. Rather than wait for others to solve this problem, we decided to tackle it ourselves and share our models and SDKs with everyone.
If you have questions or requests for models, join our [Discord](https://discord.gg/WNsvaCtmDe) or visit our [GitHub](https://github.com/FluidInference).