rajkumarrawal committed on
Commit
6f9b46a
·
1 Parent(s): cd112fe

fix: resolve meta tensor issues by forcing CPU device mapping


Updated model loading in app.py to explicitly use the CPU device with torch_dtype=torch.float32 and device_map=None, preventing meta tensor issues during inference. The model is now forcibly moved to the CPU after loading to ensure consistent behavior across different environments.

Files changed (1)
  1. app.py +11 -2
app.py CHANGED
@@ -7,9 +7,18 @@ from io import BytesIO
 
 fashion_items = ['top', 'trousers', 'jumper']
 
-# Load model and processor
+# Load model and processor with CPU device to avoid meta tensor issues
 model_name = 'Marqo/marqo-fashionSigLIP'
-model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
+
+# Force CPU usage to avoid device mapping issues
+device = torch.device('cpu')
+model = AutoModel.from_pretrained(
+    model_name,
+    trust_remote_code=True,
+    torch_dtype=torch.float32,
+    device_map=None  # This forces the model to stay on CPU
+).to(device)
+
 processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
 
 # Preprocess and normalize text data
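As a standalone illustration of the pattern this commit introduces, here is a minimal sketch of forcing a model onto the CPU and verifying that no parameters remain on the meta device. A toy torch module stands in for the real Marqo/marqo-fashionSigLIP checkpoint, since downloading the actual model is outside the scope of this example:

```python
import torch
import torch.nn as nn

# Force CPU usage, mirroring the device handling added in app.py.
device = torch.device('cpu')

# Toy module standing in for AutoModel.from_pretrained(...); the key step
# is the explicit .to(device) after construction/loading.
model = nn.Linear(4, 2).to(device)

# A parameter on the 'meta' device has no storage, so inference on it
# fails. After forcing CPU, every parameter should be a real CPU tensor.
assert all(p.device.type == 'cpu' for p in model.parameters())
assert not any(p.is_meta for p in model.parameters())

x = torch.randn(1, 4)
out = model(x)
print(out.shape)  # torch.Size([1, 2])
```

The same checks can be run against the real model after loading to confirm that device_map=None plus .to(device) left no meta tensors behind.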