Really well done
Hello!
I tried to adapt the main app.py (which uses the gated models) from
@cella110n
myself with the help of AI (since I'm too much of a noob at coding) and managed to get past an error that said something like "different mapping method".
Thanks for making this work! It gives a timeout error on this Space (I assume because the Space doesn't use a GPU), but I cloned the repo onto a T4 GPU and it works very well: inference completed in 0.652 seconds, i.e. less than 1 s to tag an image.
Congrats!
If it's of any help, I modified the app here to add batch-processing capabilities: https://huggingface.co/TekeshiX/cl_tagger/blob/main/app.py
So it seems that it doesn't work on CPU no matter what. Gemini said it's because of this (I'm no expert):
"When running a large model on a CPU, the inference time can be very long (sometimes 30-60 seconds or more per image). Most web servers, including the one Gradio uses, will terminate a connection that is unresponsive for that long, leading to the "Broken Connection" error you're seeing. The application itself doesn't crash, but the connection between your browser and the backend process is severed.
The solution is to explicitly tell Gradio to handle these long-running tasks by enabling its queuing system. This changes the communication protocol to handle jobs that take a long time without breaking the connection.
Enable Gradio's Queue: add demo.queue() before launching the app. This is the most critical fix and is designed specifically to prevent timeouts on long-running tasks. You will now see an indicator showing that your job is "in queue" and processing."
I actually just made this Space to verify that the only things that needed changing were the "constants" variables further down in the code, and then to reply to the original discussion thread you opened on the original repo, but it was late already so I kinda forgot about it. Good to see you found this Space by yourself!
Regarding the timeout error: I thought it shouldn't really have any issues running on CPU, since it's not a diffusion model or anything similar. Even my Raspberry Pi can run SmilingWolf's WD EVA02 Tagger v3 ONNX model (which cl_tagger is based on) in ~10-15 s per image, and I doubt the vCPU that HF provides on the free tier is weaker, so I didn't want to bother diagnosing the issue.
Should work on CPU now.
There was an @spaces.GPU() decorator which seems to have been waiting for the Space to find a GPU instead of simply falling back to CPU.
I also changed some other stuff that I'm not sure is beneficial or detrimental for running on GPU.
You can see exactly what changed by looking at the 2 most recent commits, first and second.
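For anyone else hitting this: one defensive pattern is to apply the decorator only when it can actually do something. The `maybe_gpu` helper below is a hypothetical sketch (not what the Space's app.py actually does), assuming the `spaces` and `torch` packages may or may not be installed:

```python
def maybe_gpu(fn):
    """Apply @spaces.GPU only when the `spaces` package and a CUDA device
    are both available; otherwise return fn unchanged so the app runs on
    plain CPU instead of waiting for a GPU that never appears."""
    try:
        import spaces  # only present on Hugging Face Spaces
        import torch
        if torch.cuda.is_available():
            return spaces.GPU()(fn)
    except ImportError:
        pass  # not on Spaces / no torch: fall through to plain CPU
    return fn

@maybe_gpu
def predict(x):
    # stand-in for the real inference function
    return x * 2
```

Off Spaces (or without a GPU), `maybe_gpu` is a no-op and the function runs directly on CPU.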
Thanks, it works on CPU just fine now, well done!