---
title: README
emoji: 📚
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---

Welcome to the llmware HuggingFace page.

We believe that the ascendance of LLMs creates a major new application pattern and data pipelines that will be transformative in the enterprise, especially in knowledge-intensive industries. Our open source research efforts focus both on the new "ware" ("middleware" and "software" that will wrap and integrate LLMs) and on building high-quality, automation-focused small specialized language models for enterprise Agent, RAG, and embedding use cases.

Our model training initiatives fall into four major categories:

**SLIMs** - small, specialized function-calling models for stacking in multi-model, Agent-based workflows -- [SLIMs](https://medium.com/@darrenoberst/slims-small-specialized-models-function-calling-and-multi-model-agents-8c935b341398)

**BLING/DRAGON** - highly accurate, fact-based question-answering models -- [small model accuracy benchmark](https://medium.com/@darrenoberst/best-small-language-models-for-accuracy-and-enterprise-use-cases-benchmark-results-cf71964759c8) | [our journey building small accurate language models](https://medium.com/@darrenoberst/building-the-most-accurate-small-language-models-our-journey-781474f64d88)

**Industry-BERT** - industry fine-tuned embedding models

**Private Inference** - self-hosting, packaging, and quantization - GGUF, ONNX, OpenVINO

Please check out a few of our recent blog posts related to these initiatives:

[thinking does not happen one token at a time](https://medium.com/@darrenoberst/thinking-does-not-happen-one-token-at-a-time-0dd0c6a528ec) | [rag instruct test dataset](https://medium.com/@darrenoberst/how-accurate-is-rag-8f0706281fd9) | [llmware emerging stack](https://medium.com/@darrenoberst/the-emerging-llm-stack-for-rag-deee093af5fa) | [becoming a master finetuning chef](https://medium.com/@darrenoberst/6-tips-to-becoming-a-master-llm-fine-tuning-chef-143ad735354b)

Interested?
[Join us on Discord](https://discord.gg/MhZn5Nc39h)