gpaasch committed on
Commit db0b943 · 1 parent: f6efb83

doesn't seem to be running

Files changed (3)
  1. README.md +44 -0
  2. requirements.txt +2 -0
  3. src/app.py +108 -2
README.md CHANGED
@@ -143,6 +143,50 @@ Judging will be conducted by representatives from sponsor partners and the Huggi
  * **LlamaIndex Docs**: [https://llamaindex.ai/docs](https://llamaindex.ai/docs)
  * **Mistral Model Hub**: [https://huggingface.co/mistral-ai/mistral-small](https://huggingface.co/mistral-ai/mistral-small)
 
+ ## Free Credits!
+
+ **Modal Labs Compute Credits** (\$250 per participant)
+ Monitor your GPU/CPU credit usage by logging into your Modal account and navigating to **Dashboard → Billing**:
+ [https://modal.com/dashboard](https://modal.com/dashboard) ([huggingface.co][1], [modal.com][2])
+
+ **Hugging Face API Credits** (\$25 per participant)
+ View your remaining credits and invoices on the Hugging Face billing dashboard:
+ [https://huggingface.co/settings/billing](https://huggingface.co/settings/billing) ([huggingface.co][1], [huggingface.co][3])
+
+ **Nebius AI Cloud Credits** (\$25 to the first 3,300 participants)
+ Check your Nebius “Grants and promocodes” balance and detailed billing reports at:
+ [https://nebius.com/services/billing](https://nebius.com/services/billing) ([huggingface.co][1], [nebius.com][4])
+
+ **Anthropic Claude API Credits** (\$25 to the first 1,000 participants)
+ Track your Claude usage and remaining credits in the Anthropic Console under **Settings → Billing**:
+ [https://console.anthropic.com/settings/billing](https://console.anthropic.com/settings/billing) ([huggingface.co][1])
+
+ **OpenAI API Credits** (\$25 to the first 1,000 participants)
+ Monitor your API calls, token usage, and spend on the OpenAI Usage dashboard:
+ [https://platform.openai.com/account/usage](https://platform.openai.com/account/usage) ([huggingface.co][1], [platform.openai.com][5])
+
+ **Hyperbolic Labs API Credits** (\$15 to the first 1,000 participants)
+ After logging in at the Hyperbolic AI Dashboard, go to **Settings → Billing** to view your credit balance and transaction history:
+ [https://app.hyperbolic.xyz](https://app.hyperbolic.xyz) ([huggingface.co][1], [docs.hyperbolic.xyz][6], [hyperbolic.xyz][7])
+
+ **Mistral AI API Credits** (\$25 to the first 500 participants)
+ Sign in at the Mistral Console and navigate to **Workspace → Billing** to activate and monitor your credits:
+ [https://console.mistral.ai](https://console.mistral.ai) ([huggingface.co][1], [docs.mistral.ai][8])
+
+ **SambaNova AI Cloud Credits** (\$25 to the first 250 participants)
+ Log in to SambaNova Cloud and check **Billing & Usage** under the plans section:
+ [https://cloud.sambanova.ai/plans/billing](https://cloud.sambanova.ai/plans/billing) ([huggingface.co][1], [cloud.sambanova.ai][9])
+
+ [1]: https://huggingface.co/Agents-MCP-Hackathon "Agents-MCP-Hackathon"
+ [2]: https://modal.com/ "Modal: High-performance AI infrastructure"
+ [3]: https://huggingface.co/docs/hub/billing "Billing - Hugging Face"
+ [4]: https://nebius.com/services/billing "Billing - Nebius"
+ [5]: https://platform.openai.com/account/usage "Account Usage - OpenAI Platform"
+ [6]: https://docs.hyperbolic.xyz/docs/getting-started "Hyperbolic API"
+ [7]: https://hyperbolic.xyz/blog/how-to-set-up-your-account-on-hyperbolic "How to Set Up Your Account on Hyperbolic"
+ [8]: https://docs.mistral.ai/getting-started/quickstart/ "Quickstart | Mistral AI Large Language Models"
+ [9]: https://cloud.sambanova.ai/plans/billing "Billing - SambaNova Cloud"
+
  # About the Author
 
  **Graham Paasch** is an AI realist passionate about the coming AI revolution.
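
The credit programs above are consumed through each provider's API key rather than through the dashboards themselves. A minimal sketch of a common pattern for a Space, assuming keys are stored as Space secrets; the secret names below are conventional, not mandated by this repo:

```python
import os

# Conventional env var names -- set whichever providers you use under the
# Space's Settings -> Variables and secrets (names are an assumption,
# not defined anywhere in this repo)
PROVIDER_KEYS = {
    "Anthropic": os.environ.get("ANTHROPIC_API_KEY"),
    "OpenAI": os.environ.get("OPENAI_API_KEY"),
    "Mistral": os.environ.get("MISTRAL_API_KEY"),
}

for provider, key in PROVIDER_KEYS.items():
    print(f"{provider} key: {'set' if key else 'missing'}")
```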
requirements.txt CHANGED
@@ -23,3 +23,5 @@ requests # For MCP endpoint testing
 
  # system requirement for audio I/O
  ffmpeg-python
+
+ psutil # For system resource detection
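
`psutil` is the only new runtime dependency; `src/app.py` uses it to size the local model to the host. A minimal sketch of the call this commit relies on, plus one illustrative extra:

```python
import psutil

# Total physical RAM in GB -- the same computation
# get_system_specs() performs in src/app.py
ram_gb = psutil.virtual_memory().total / (1024 ** 3)
print(f"RAM: {ram_gb:.1f} GB")

# Logical CPU count; not used by this commit, but a natural next
# input for llama.cpp thread tuning (an assumption, not in the diff)
print(f"Logical CPUs: {psutil.cpu_count(logical=True)}")
```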
src/app.py CHANGED
@@ -1,10 +1,117 @@
  import os
+ from pathlib import Path
+ from huggingface_hub import snapshot_download
  import gradio as gr
  from llama_index.core import Settings, ServiceContext
  from llama_index.embeddings.huggingface import HuggingFaceEmbedding
  from llama_index.llms.llama_cpp import LlamaCPP
  from parse_tabular import create_symptom_index
  import json
+ import torch
+ import psutil
+ import subprocess
+ from typing import Tuple, Dict
+
+ # Model options mapped to their requirements
+ MODEL_OPTIONS = {
+     "tiny": {
+         "name": "TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf",
+         "repo": "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
+         "vram_req": 2,  # GB
+         "ram_req": 4    # GB
+     },
+     "small": {
+         "name": "phi-2.Q4_K_M.gguf",
+         "repo": "TheBloke/phi-2-GGUF",
+         "vram_req": 4,
+         "ram_req": 8
+     },
+     "medium": {
+         "name": "mistral-7b-instruct-v0.1.Q4_K_M.gguf",
+         "repo": "TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
+         "vram_req": 6,
+         "ram_req": 16
+     }
+ }
+
+ def get_system_specs() -> Dict[str, float]:
+     """Get system specifications."""
+     # Get RAM
+     ram_gb = psutil.virtual_memory().total / (1024**3)
+
+     # Get GPU info if available
+     gpu_vram_gb = 0
+     if torch.cuda.is_available():
+         try:
+             # Query GPU memory in bytes and convert to GB
+             gpu_vram_gb = torch.cuda.get_device_properties(0).total_memory / (1024**3)
+         except Exception as e:
+             print(f"Warning: Could not get GPU memory: {e}")
+
+     return {
+         "ram_gb": ram_gb,
+         "gpu_vram_gb": gpu_vram_gb
+     }
+
+ def select_best_model() -> Tuple[str, str]:
+     """Select the best model based on system specifications."""
+     specs = get_system_specs()
+     print("\nSystem specifications:")
+     print(f"RAM: {specs['ram_gb']:.1f} GB")
+     print(f"GPU VRAM: {specs['gpu_vram_gb']:.1f} GB")
+
+     # Prioritize GPU if available
+     if specs['gpu_vram_gb'] >= 4:  # 4 GB VRAM fits the 4-bit "small" quantization
+         model_tier = "small"       # "small" maps to phi-2 (see MODEL_OPTIONS)
+     elif specs['ram_gb'] >= 8:
+         model_tier = "small"
+     else:
+         model_tier = "tiny"
+
+     selected = MODEL_OPTIONS[model_tier]
+     print(f"\nSelected model tier: {model_tier}")
+     print(f"Model: {selected['name']}")
+
+     return selected['name'], selected['repo']
+
+ # Set up model paths
+ MODEL_NAME, REPO_ID = select_best_model()
+ BASE_DIR = os.path.dirname(os.path.dirname(__file__))
+ MODEL_DIR = os.path.join(BASE_DIR, "models")
+ MODEL_PATH = os.path.join(MODEL_DIR, MODEL_NAME)
+
+ def ensure_model():
+     # Create models/ directory if missing
+     os.makedirs(MODEL_DIR, exist_ok=True)
+
+     # Download model if it's not already there
+     model_path = os.path.join(MODEL_DIR, MODEL_NAME)
+     if not os.path.isfile(model_path):
+         print(f"Downloading model from {REPO_ID}...")
+         # Download to a subdirectory to avoid file conflicts
+         download_dir = os.path.join(MODEL_DIR, "download_cache")
+         snapshot_download(
+             repo_id=REPO_ID,
+             repo_type="model",
+             local_dir=download_dir,
+             local_dir_use_symlinks=False
+         )
+
+         # Move the specific model file we want to models/
+         src_path = os.path.join(download_dir, MODEL_NAME)
+         if os.path.exists(src_path):
+             import shutil
+             shutil.move(src_path, model_path)
+             print(f"Moved model to {model_path}")
+         else:
+             raise ValueError(f"Downloaded files but couldn't find {MODEL_NAME}")
+     else:
+         print(f"Model already exists at {model_path}")
+
+     return model_path
+
+ # Ensure model is downloaded
+ model_path = ensure_model()
 
  # Configure embeddings globally
  Settings.embed_model = HuggingFaceEmbedding(
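
One note on the hunk above before the second one: tier selection keys off total VRAM/RAM only, so a quick way to sanity-check it on a given machine is to call the helpers directly. A minimal sketch, assuming the definitions from the hunk above are in scope; the `MODEL_TIER` override variable is hypothetical and not part of this commit:

```python
import os

specs = get_system_specs()
print(f"Detected {specs['ram_gb']:.1f} GB RAM, {specs['gpu_vram_gb']:.1f} GB VRAM")

# Hypothetical escape hatch for when auto-detection misfires
# (shared GPUs, containers with capped memory, etc.)
tier = os.environ.get("MODEL_TIER")
if tier in MODEL_OPTIONS:
    selected = MODEL_OPTIONS[tier]
    print(f"Forced tier: {tier} -> {selected['name']}")
else:
    name, repo = select_best_model()
    print(f"Auto-selected {name} from {repo}")
```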
 
@@ -13,8 +120,7 @@ Settings.embed_model = HuggingFaceEmbedding(
 
  # Configure local LLM with LlamaCPP
  llm = LlamaCPP(
-     model_url="https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
-     model_path="models/mistral-7b-instruct-v0.1.Q4_K_M.gguf",
+     model_path=model_path,
      temperature=0.7,
      max_new_tokens=256,
      context_window=2048
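
One thing a reviewer might flag in `ensure_model()`: `snapshot_download` fetches every file in the model repo even though only a single GGUF quantization is needed, which for TheBloke repos can mean several extra multi-gigabyte downloads. A lighter sketch using `hf_hub_download` to fetch just the one file; it reuses `MODEL_DIR`/`MODEL_NAME`/`REPO_ID` from the diff above and is untested against this Space:

```python
import os
from huggingface_hub import hf_hub_download

def ensure_model_single_file() -> str:
    """Download only the selected GGUF file instead of the full snapshot."""
    os.makedirs(MODEL_DIR, exist_ok=True)
    model_path = os.path.join(MODEL_DIR, MODEL_NAME)
    if not os.path.isfile(model_path):
        print(f"Downloading {MODEL_NAME} from {REPO_ID}...")
        # hf_hub_download returns the local path to the downloaded file
        model_path = hf_hub_download(
            repo_id=REPO_ID,
            filename=MODEL_NAME,
            local_dir=MODEL_DIR,
        )
    return model_path
```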