Pujan Neupane commited on
Commit
5792ae0
·
1 Parent(s): 9cca434
Machine-learning/.gitattributes ADDED
@@ -0,0 +1,2 @@
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ Ai-Text-Detector/model_weights.pth filter=lfs diff=lfs merge=lfs -text
Machine-learning/.gitignore ADDED
@@ -0,0 +1,57 @@
+ # ---- Python Environment ----
+ venv/
+ .venv/
+ env/
+ ENV/
+ *.pyc
+ *.pyo
+ *.pyd
+ __pycache__/
+ **/__pycache__/
+
+ # ---- VS Code / IDEs ----
+ .vscode/
+ .idea/
+ *.swp
+
+ # ---- Jupyter / IPython ----
+ .ipynb_checkpoints/
+ *.ipynb
+
+ # ---- Model & Data Artifacts ----
+ *.pth
+ *.pt
+ *.h5
+ *.ckpt
+ *.onnx
+ *.joblib
+ *.pkl
+
+ # ---- Hugging Face Cache ----
+ ~/.cache/huggingface/
+ huggingface_cache/
+
+ # ---- Logs and Dumps ----
+ *.log
+ *.out
+ *.err
+
+ # ---- Build Artifacts ----
+ build/
+ dist/
+ *.egg-info/
+
+ # ---- System Files ----
+ .DS_Store
+ Thumbs.db
+
+ # ---- Environment Configs ----
+ .env
+ .env.*
+
+ # ---- Project-specific ----
+ Ai-Text-Detector/
+ HuggingFace/model/
+
+ # ---- Node Projects (if applicable) ----
+ node_modules/
+
Machine-learning/HuggingFace/main.py ADDED
@@ -0,0 +1,18 @@
+ import os
+ from huggingface_hub import Repository
+
+
+ def download_repo():
+     hf_token = os.getenv("HF_TOKEN")
+     if not hf_token:
+         raise ValueError("HF_TOKEN not found in environment variables.")
+
+     repo_id = "Pujan-Dev/test"
+     local_dir = "../Ai-Text-Detector/"
+
+     repo = Repository(local_dir, clone_from=repo_id, token=hf_token)
+     print(f"Repository downloaded to: {local_dir}")
+
+
+ if __name__ == "__main__":
+     download_repo()
Machine-learning/HuggingFace/readme.md ADDED
@@ -0,0 +1,61 @@
+ ### Hugging Face CLI Tool
+
+ This CLI tool allows you to **upload** and **download** models from Hugging Face repositories. It requires a **Hugging Face access token (`HF_TOKEN`)** for authentication, especially for private repositories.
+
+ ### Prerequisites
+
+ 1. **Install Hugging Face Hub**:
+
+    ```bash
+    pip install huggingface_hub
+    ```
+
+ 2. **Get HF_TOKEN**:
+    - Log in to [Hugging Face](https://huggingface.co/).
+    - Go to **Settings** → **Access Tokens** → **Create a new token** with `read` and `write` permissions.
+    - Save the token.
+
+ ### Usage
+
+ 1. **Set the Token**:
+
+    - **Linux/macOS**:
+      ```bash
+      export HF_TOKEN=your_token_here
+      ```
+    - **Windows (CMD)**:
+      ```cmd
+      set HF_TOKEN=your_token_here
+      ```
+
+ 2. **Download Model**:
+
+    ```bash
+    python main.py --download --repo-id <repo_name> --save-dir <local_save_path>
+    ```
+
+ 3. **Upload Model**:
+    ```bash
+    python main.py --upload --repo-id <repo_name> --model-path <local_model_path>
+    ```
+
+ ### Example
+
+ To download a model:
+
+ ```bash
+ python main.py
+ ```
+
+ ### Authentication
+
+ Ensure you set `HF_TOKEN` to access private repositories. If it is not set, the script will raise an error.
+
+ ---
+
+ ### ⚠️ Note
+
+ **Make sure to run this script from the `HuggingFace` directory so that relative paths resolve correctly.**
+
+ ---
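The failure mode described above can be checked without any network access; a minimal sketch of the `HF_TOKEN` guard used by `main.py` (the token value below is a placeholder, not a real token):

```python
import os

def require_hf_token() -> str:
    # Mirrors the guard in main.py: fail fast when HF_TOKEN is absent.
    token = os.getenv("HF_TOKEN")
    if not token:
        raise ValueError("HF_TOKEN not found in environment variables.")
    return token

os.environ["HF_TOKEN"] = "hf_dummy_token"  # placeholder, not a real token
print(require_hf_token())  # hf_dummy_token
```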
Machine-learning/README.md ADDED
@@ -0,0 +1,289 @@
+ ### **FastAPI AI**
+
+ This FastAPI app loads a GPT-2 model, tokenizes input text, classifies it, and returns whether the text is AI-generated or human-written.
+
+ ### **Install Dependencies**
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ This command installs all the dependencies listed in `requirements.txt`, ensuring your environment has the packages required to run the project.
+
+ **NOTE:** If you add or change any dependencies, don't forget to update `requirements.txt` with `pip freeze > requirements.txt`.
+
+ ---
+
+ ### **Functions**
+
+ 1. **`load_model()`**
+    Loads the GPT-2 model and tokenizer from specified paths.
+
+ 2. **`lifespan()`**
+    Manages the app's lifecycle: loads the model at startup and handles cleanup on shutdown.
+
+ 3. **`classify_text_sync()`**
+    Synchronously tokenizes input text and classifies it using the GPT-2 model. Returns the classification and perplexity.
+
+ 4. **`classify_text()`**
+    Asynchronously executes `classify_text_sync()` in a thread pool to ensure non-blocking processing.
+
+ 5. **`analyze_text()`**
+    **POST** endpoint: accepts text input, classifies it using `classify_text()`, and returns the result with perplexity.
+
+ 6. **`health_check()`**
+    **GET** endpoint: simple health check to confirm the API is running.
+
+ ---
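The classification step inside `classify_text_sync()` maps perplexity to a label; a minimal sketch of that mapping (the cutoffs 60 and 80 are the ones used in `app.py`):

```python
def label_from_perplexity(perplexity: float) -> str:
    # Lower perplexity means the text is more predictable to GPT-2,
    # which this project treats as a sign of AI generation.
    if perplexity < 60:
        return "AI-generated*"
    elif perplexity < 80:
        return "Probably AI-generated*"
    return "Human-written*"

print(label_from_perplexity(55.67))   # AI-generated*
print(label_from_perplexity(72.0))    # Probably AI-generated*
print(label_from_perplexity(120.0))   # Human-written*
```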
+
+ ### **Code Overview**
+
+ ```python
+ executor = ThreadPoolExecutor(max_workers=20)
+ ```
+
+ - **`ThreadPoolExecutor(max_workers=20)`** caps the number of concurrent classification threads per worker process at 20. This helps control resource usage and prevents overloading the server.
+
+ ---
+
+ ### **Running and Load Balancing**
+
+ To run the app in production with load balancing:
+
+ ```bash
+ uvicorn app:app --host 0.0.0.0 --port 8000 --workers 4
+ ```
+
+ This command launches the FastAPI app with **4 worker processes**, allowing it to handle multiple requests concurrently.
+
+ ### **Concurrency Explained**
+
+ 1. **`ThreadPoolExecutor(max_workers=20)`**
+
+    - Controls the **number of threads** within a **single worker** process.
+    - Allows up to 20 tasks (text classification requests) to be handled simultaneously per worker, improving responsiveness for I/O-bound tasks.
+
+ 2. **`--workers 4` in Uvicorn**
+    - Spawns **4 independent worker processes** to handle incoming HTTP requests.
+    - Each worker can independently handle multiple tasks, increasing the app's ability to process concurrent requests in parallel.
+
+ ### **How They Relate**
+
+ - **Uvicorn's `--workers`** defines how many worker processes the server will run.
+ - **`ThreadPoolExecutor`** limits how many tasks (threads) each worker can process concurrently.
+
+ For example, with **4 workers** and **20 threads per worker**, the server can handle **80 tasks concurrently**. This provides scalable and efficient processing, balancing the load across multiple workers and threads.
+
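The worker/thread interplay above can be sketched with a stand-in for the blocking classification call (the `classify_sync` body here is a placeholder, not the real GPT-2 code):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=20)

def classify_sync(text: str) -> str:
    # Placeholder for the blocking, CPU-bound GPT-2 classification call.
    return f"classified:{text}"

async def classify(text: str) -> str:
    # Offload the blocking call to the thread pool so the event loop
    # can keep serving other requests while this one runs.
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(executor, classify_sync, text)

print(asyncio.run(classify("hello")))  # classified:hello
```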
+ ### **Endpoints**
+
+ #### 1. **`/analyze`**
+
+ - **Method:** `POST`
+ - **Description:** Classifies whether the text is AI-generated or human-written.
+ - **Request:**
+   ```json
+   { "text": "sample text" }
+   ```
+ - **Response:**
+   ```json
+   { "result": "AI-generated", "perplexity": 55.67 }
+   ```
+
+ #### 2. **`/health`**
+
+ - **Method:** `GET`
+ - **Description:** Returns the status of the API.
+ - **Response:**
+   ```json
+   { "status": "ok" }
+   ```
+
+ ---
+
+ ### **Running the API**
+
+ Start the server with:
+
+ ```bash
+ uvicorn app:app --host 0.0.0.0 --port 8000 --workers 4
+ ```
+
+ ---
+
+ ### **🧪 Testing the API**
+
+ You can test the FastAPI endpoint using `curl` like this:
+
+ ```bash
+ curl -X POST http://127.0.0.1:8000/analyze \
+   -H "Authorization: Bearer HelloThere" \
+   -H "Content-Type: application/json" \
+   -d '{"text": "This is a sample sentence for analysis."}'
+ ```
+
+ - The `-H "Authorization: Bearer HelloThere"` part performs the **handshake**.
+ - FastAPI checks this token against the one loaded from the `.env` file.
+ - If the token matches, the request is accepted and processed.
+ - Otherwise, it responds with a `403 Forbidden` error.
+
+ ---
+
+ ### **API Documentation**
+
+ - **Swagger UI:** `http://127.0.0.1:8000/docs`
+ - **ReDoc:** `http://127.0.0.1:8000/redoc`
+
+ ### **🔐 Handshake Mechanism**
+
+ Here we implement a simple handshake to verify that each request comes from a trusted source (e.g., our NestJS server). It works as follows:
+
+ - We load a secret token from the `.env` file.
+ - When a request is made to the FastAPI server, we extract the `Authorization` header and compare it with our expected secret token.
+ - If the token does **not** match, we immediately return a **403 Forbidden** response with the message `"Unauthorized"`.
+ - If the token **does** match, we allow the request to proceed to the next step.
+
+ The verification function looks like this:
+
+ ```python
+ def verify_token(auth: str):
+     if auth != f"Bearer {EXPECTED_TOKEN}":
+         raise HTTPException(status_code=403, detail="Unauthorized")
+ ```
+
+ This provides a basic but effective layer of security to prevent unauthorized access to the API.
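The check can also be exercised in isolation; a minimal self-contained sketch (a plain `PermissionError` stands in for FastAPI's `HTTPException`, and `HelloThere` is the example token used elsewhere in this README):

```python
EXPECTED_TOKEN = "HelloThere"  # in the real app this comes from .env

def verify_token(auth: str) -> None:
    # Accept only an exact "Bearer <token>" match, as in app.py.
    if auth != f"Bearer {EXPECTED_TOKEN}":
        raise PermissionError("Unauthorized")

verify_token("Bearer HelloThere")  # passes silently

try:
    verify_token("Bearer wrong-token")
except PermissionError as err:
    print(err)  # Unauthorized
```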
+
+ ### **Implement it with NestJS**
+
+ NOTE: Create a microservice in NestJS, implement the FastAPI call there, and invoke it from `app.controller.ts`.
+
+ In the `fastapi.service.ts` file, we do the following:
+
+ ### Project Structure
+
+ ```files
+ nestjs-fastapi-bridge/
+ ├── src/
+ │   ├── app.controller.ts
+ │   ├── app.module.ts
+ │   └── fastapi.service.ts
+ ├── .env
+ ```
+
+ ---
+
+ ### Step-by-Step Setup
+
+ #### 1. `.env`
+
+ Create a `.env` file at the root with the following:
+
+ ```environment
+ FASTAPI_BASE_URL=http://localhost:8000
+ SECRET_TOKEN="HelloThere"
+ ```
+
+ #### 2. `fastapi.service.ts`
+
+ ```typescript
+ // src/fastapi.service.ts
+ import { Injectable } from "@nestjs/common";
+ import { HttpService } from "@nestjs/axios";
+ import { ConfigService } from "@nestjs/config";
+ import { firstValueFrom } from "rxjs";
+
+ @Injectable()
+ export class FastAPIService {
+   constructor(
+     private http: HttpService,
+     private config: ConfigService,
+   ) {}
+
+   async analyzeText(text: string) {
+     const url = `${this.config.get("FASTAPI_BASE_URL")}/analyze`;
+     const token = this.config.get("SECRET_TOKEN");
+
+     const response = await firstValueFrom(
+       this.http.post(
+         url,
+         { text },
+         {
+           headers: {
+             Authorization: `Bearer ${token}`,
+           },
+         },
+       ),
+     );
+
+     return response.data;
+   }
+ }
+ ```
+
+ #### 3. `app.module.ts`
+
+ ```typescript
+ // src/app.module.ts
+ import { Module } from "@nestjs/common";
+ import { ConfigModule } from "@nestjs/config";
+ import { HttpModule } from "@nestjs/axios";
+ import { AppController } from "./app.controller";
+ import { FastAPIService } from "./fastapi.service";
+
+ @Module({
+   imports: [ConfigModule.forRoot(), HttpModule],
+   controllers: [AppController],
+   providers: [FastAPIService],
+ })
+ export class AppModule {}
+ ```
+
+ ---
+
+ #### 4. `app.controller.ts`
+
+ ```typescript
+ // src/app.controller.ts
+ import { Body, Controller, Post, Get } from '@nestjs/common';
+ import { FastAPIService } from './fastapi.service';
+
+ @Controller()
+ export class AppController {
+   constructor(private readonly fastapiService: FastAPIService) {}
+
+   @Post('analyze-text')
+   async callFastAPI(@Body('text') text: string) {
+     return this.fastapiService.analyzeText(text);
+   }
+
+   @Get()
+   getHello(): string {
+     return 'NestJS is connected to FastAPI';
+   }
+ }
+ ```
+
+ ### 🚀 How to Run
+
+ Run both the NestJS and FastAPI servers:
+
+ - For NestJS:
+   ```bash
+   npm run start
+   ```
+ - For FastAPI:
+   ```bash
+   uvicorn app:app --reload
+   ```
+
+ Make sure your FastAPI service is running at `http://localhost:8000`.
+
+ ### Test with CURL
+
+ ```bash
+ curl -X POST http://localhost:3000/analyze-text \
+   -H 'Content-Type: application/json' \
+   -d '{"text": "This is a test input"}'
+ ```
Machine-learning/app.py ADDED
@@ -0,0 +1,129 @@
+ import torch
+ from transformers import GPT2LMHeadModel, GPT2TokenizerFast
+ from fastapi import FastAPI, HTTPException, Header
+ from pydantic import BaseModel
+ import asyncio
+ from concurrent.futures import ThreadPoolExecutor
+ from contextlib import asynccontextmanager
+ from dotenv import dotenv_values
+
+ executor = ThreadPoolExecutor(max_workers=20)
+
+ # Load .env file
+ env = dotenv_values(".env")
+ EXPECTED_TOKEN = env.get("SECRET_TOKEN")
+
+ # Global variables for model and tokenizer
+ model, tokenizer = None, None
+
+
+ # Function to verify the bearer token
+ def verify_token(auth: str):
+     if auth != f"Bearer {EXPECTED_TOKEN}":
+         raise HTTPException(status_code=403, detail="Unauthorized")
+
+
+ # Function to load model and tokenizer
+ def load_model():
+     model_path = "./Ai-Text-Detector/model"
+     weights_path = "./Ai-Text-Detector/model_weights.pth"
+     tokenizer = GPT2TokenizerFast.from_pretrained(model_path)
+     model = GPT2LMHeadModel.from_pretrained("gpt2")
+     model.load_state_dict(torch.load(weights_path, map_location=torch.device("cpu")))
+     model.eval()  # Set the model to evaluation mode
+     return model, tokenizer
+
+
+ @asynccontextmanager
+ async def lifespan(app: FastAPI):
+     global model, tokenizer
+     model, tokenizer = load_model()
+     yield
+
+
+ # FastAPI instance with the lifespan context manager attached
+ app = FastAPI(lifespan=lifespan)
+
+
+ # Request body for input data
+ class TextInput(BaseModel):
+     text: str
+
+
+ # Sync function to classify text
+ def classify_text_sync(sentence: str):
+     inputs = tokenizer(sentence, return_tensors="pt", truncation=True, padding=True)
+     input_ids = inputs["input_ids"]
+     attention_mask = inputs["attention_mask"]
+
+     with torch.no_grad():
+         outputs = model(input_ids, attention_mask=attention_mask, labels=input_ids)
+         loss = outputs.loss
+         perplexity = torch.exp(loss).item()
+
+     if perplexity < 60:
+         result = "AI-generated*"
+     elif perplexity < 80:
+         result = "Probably AI-generated*"
+     else:
+         result = "Human-written*"
+
+     return result, perplexity
+
+
+ # Async wrapper for text classification
+ async def classify_text(sentence: str):
+     loop = asyncio.get_event_loop()
+     return await loop.run_in_executor(executor, classify_text_sync, sentence)
+
+
+ # POST route to analyze text
+ @app.post("/analyze")
+ async def analyze_text(data: TextInput, authorization: str = Header(default="")):
+     verify_token(authorization)  # Token verification
+     user_input = data.text.strip()
+
+     if not user_input:
+         raise HTTPException(status_code=400, detail="Text cannot be empty")
+
+     result, perplexity = await classify_text(user_input)
+
+     return {
+         "result": result,
+         "perplexity": round(perplexity, 2),
+     }
+
+
+ # Health check route
+ @app.get("/health")
+ async def health_check():
+     return {"status": "ok"}
+
+
+ # Simple index route
+ @app.get("/")
+ def index():
+     return {"message": "It's an API"}
+
+
+ # Start the app (this file is app.py, so the import string is "app:app")
+ if __name__ == "__main__":
+     import uvicorn
+
+     uvicorn.run("app:app", host="0.0.0.0", port=8000, workers=4)
Machine-learning/requirements.txt ADDED
@@ -0,0 +1,210 @@
+ absl-py==2.2.2
+ accelerate==1.6.0
+ aiohappyeyeballs==2.6.1
+ aiohttp==3.11.16
+ aiosignal==1.3.2
+ altair==5.5.0
+ annotated-types==0.7.0
+ anyio==4.9.0
+ argon2-cffi==23.1.0
+ argon2-cffi-bindings==21.2.0
+ arrow==1.3.0
+ asgiref==3.8.1
+ asttokens==3.0.0
+ async-lru==2.0.5
+ attrs==25.3.0
+ babel==2.17.0
+ beautifulsoup4==4.13.4
+ bleach==6.2.0
+ blinker==1.9.0
+ cachetools==5.5.2
+ certifi==2025.1.31
+ cffi==1.17.1
+ charset-normalizer==2.1.1
+ click==8.1.8
+ comm==0.2.2
+ contourpy==1.3.1
+ cycler==0.12.1
+ datasets==3.5.0
+ DateTime==4.7
+ debugpy==1.8.13
+ decorator==5.2.1
+ defusedxml==0.7.1
+ dill==0.3.8
+ Django==5.2
+ dotenv==0.9.9
+ executing==2.2.0
+ fastapi==0.115.12
+ fastjsonschema==2.21.1
+ filelock==3.13.1
+ Flask==3.1.0
+ flask-cors==5.0.1
+ fonttools==4.56.0
+ fqdn==1.5.1
+ frozenlist==1.6.0
+ fsspec==2024.6.1
+ generativeai==0.0.1
+ gitdb==4.0.12
+ GitPython==3.1.44
+ google-ai-generativelanguage==0.6.15
+ google-api-core==2.24.2
+ google-api-python-client==2.165.0
+ google-auth==2.38.0
+ google-auth-httplib2==0.2.0
+ google-genai==1.7.0
+ google-generativeai==0.8.4
+ googleapis-common-protos==1.69.2
+ grpcio==1.71.0
+ grpcio-status==1.71.0
+ h11==0.14.0
+ h5py==3.13.0
+ html5lib==1.1
+ httpcore==1.0.7
+ httplib2==0.22.0
+ httpx==0.28.1
+ huggingface-hub==0.30.2
+ idna==3.10
+ inquirerpy==0.3.4
+ ipykernel==6.29.5
+ ipython==9.0.2
+ ipython_pygments_lexers==1.1.1
+ isoduration==20.11.0
+ itsdangerous==2.2.0
+ jedi==0.19.2
+ Jinja2==3.1.4
+ joblib==1.4.2
+ json5==0.12.0
+ jsonpointer==3.0.0
+ jsonschema==4.23.0
+ jsonschema-specifications==2024.10.1
+ jupyter-events==0.12.0
+ jupyter-lsp==2.2.5
+ jupyter_client==8.6.3
+ jupyter_core==5.7.2
+ jupyter_server==2.15.0
+ jupyter_server_terminals==0.5.3
+ jupyterlab==4.4.0
+ jupyterlab_pygments==0.3.0
+ jupyterlab_server==2.27.3
+ keras==3.9.2
+ kiwisolver==1.4.8
+ markdown-it-py==3.0.0
+ MarkupSafe==3.0.2
+ matplotlib==3.10.1
+ matplotlib-inline==0.1.7
+ mdurl==0.1.2
+ mechanize==0.4.10
+ mistune==3.1.3
+ ml_dtypes==0.5.1
+ mpmath==1.3.0
+ multidict==6.4.3
+ multiprocess==0.70.16
+ namex==0.0.8
+ narwhals==1.35.0
+ nbclient==0.10.2
+ nbconvert==7.16.6
+ nbformat==5.10.4
+ nest-asyncio==1.6.0
+ networkx==3.3
+ notebook==7.4.0
+ notebook_shim==0.2.4
+ numpy==2.2.4
+ nvidia-cublas-cu11==11.11.3.6
+ nvidia-cuda-cupti-cu11==11.8.87
+ nvidia-cuda-nvrtc-cu11==11.8.89
+ nvidia-cuda-runtime-cu11==11.8.89
+ nvidia-cudnn-cu11==9.1.0.70
+ nvidia-cufft-cu11==10.9.0.58
+ nvidia-curand-cu11==10.3.0.86
+ nvidia-cusolver-cu11==11.4.1.48
+ nvidia-cusparse-cu11==11.7.5.86
+ nvidia-nccl-cu11==2.21.5
+ nvidia-nvtx-cu11==11.8.86
+ optree==0.15.0
+ overrides==7.7.0
+ packaging==24.2
+ pandas==2.2.3
+ pandocfilters==1.5.1
+ parso==0.8.4
+ pexpect==4.9.0
+ pfzy==0.3.4
+ pillow==11.1.0
+ platformdirs==4.3.7
+ prometheus_client==0.21.1
+ prompt_toolkit==3.0.50
+ propcache==0.3.1
+ proto-plus==1.26.1
+ protobuf==5.29.4
+ psutil==7.0.0
+ ptyprocess==0.7.0
+ pure_eval==0.2.3
+ pyarrow==19.0.1
+ pyasn1==0.6.1
+ pyasn1_modules==0.4.1
+ pycparser==2.22
+ pydantic==2.10.6
+ pydantic_core==2.27.2
+ pydeck==0.9.1
+ pygame==2.6.1
+ Pygments==2.19.1
+ pyparsing==3.2.2
+ pystyle==2.0
+ python-dateutil==2.9.0.post0
+ python-dotenv==1.1.0
+ python-json-logger==3.3.0
+ pytz==2025.1
+ PyYAML==6.0.2
+ pyzmq==26.3.0
+ referencing==0.36.2
+ regex==2024.11.6
+ requests==2.32.3
+ rfc3339-validator==0.1.4
+ rfc3986-validator==0.1.1
+ rich==14.0.0
+ rpds-py==0.24.0
+ rsa==4.9
+ safetensors==0.5.3
+ scikit-learn==1.6.1
+ scipy==1.15.2
+ seaborn==0.13.2
+ Send2Trash==1.8.3
+ setuptools==70.2.0
+ six==1.17.0
+ smmap==5.0.2
+ sniffio==1.3.1
+ soupsieve==2.6
+ sqlparse==0.5.3
+ stack-data==0.6.3
+ starlette==0.46.2
+ streamlit==1.44.1
+ sympy==1.13.1
+ tenacity==9.1.2
+ terminado==0.18.1
+ threadpoolctl==3.6.0
+ tinycss2==1.4.0
+ tokenizers==0.21.1
+ toml==0.10.2
+ torch==2.6.0+cu118
+ torchaudio==2.6.0+cu118
+ torchvision==0.21.0+cu118
+ tornado==6.4.2
+ tqdm==4.67.1
+ traitlets==5.14.3
+ transformers==4.51.3
+ triton==3.2.0
+ types-python-dateutil==2.9.0.20241206
+ typing_extensions==4.12.2
+ tzdata==2025.2
+ uri-template==1.3.0
+ uritemplate==4.1.1
+ urllib3==1.26.20
+ watchdog==6.0.0
+ wcwidth==0.2.13
+ webcolors==24.11.1
+ webencodings==0.5.1
+ websocket-client==1.8.0
+ websockets==15.0.1
+ Werkzeug==3.1.3
+ xxhash==3.5.0
+ yarl==1.20.0
+ zope.interface==7.2
Machine-learning/test.sh ADDED
@@ -0,0 +1 @@
+ echo "ok"