2025-05-29 14:50:28,409 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 14:50:28,409 - __main__ - DEBUG - API key found, length: 39
2025-05-29 14:50:28,409 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 14:50:28,409 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 14:50:28,409 - auto_diffusers - INFO - Successfully configured Gemini AI model
2025-05-29 14:50:28,409 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 14:50:28,409 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 14:50:28,409 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 14:50:28,409 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 14:50:28,409 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 14:50:28,413 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 14:50:28,413 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 14:50:28,856 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 14:50:28,856 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 14:50:28,856 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 14:50:28,856 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 14:50:28,856 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 14:50:28,856 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 14:50:28,856 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 14:50:28,856 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 14:50:28,856 - __main__ - ERROR - Failed to initialize SimpleMemoryCalculator: 'SimpleMemoryCalculator' object has no attribute 'known_models'
2025-05-29 14:52:16,109 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 14:52:16,109 - __main__ - DEBUG - API key found, length: 39
2025-05-29 14:52:16,109 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 14:52:16,109 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 14:52:16,109 - auto_diffusers - INFO - Successfully configured Gemini AI model
2025-05-29 14:52:16,109 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 14:52:16,109 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 14:52:16,109 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 14:52:16,109 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 14:52:16,109 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 14:52:16,113 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 14:52:16,113 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 14:52:16,551 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 14:52:16,551 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 14:52:16,551 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 14:52:16,551 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 14:52:16,551 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 14:52:16,551 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 14:52:16,551 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 14:52:16,551 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 14:52:16,551 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 14:52:16,551 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 14:52:16,551 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 14:52:16,553 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 14:52:16,566 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 14:52:16,572 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 14:52:16,648 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 14:52:16,683 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 14:52:16,684 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 14:52:16,684 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 14:52:16,684 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 14:52:16,684 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 14:52:16,684 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 14:52:16,685 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 14:52:16,685 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 05:52:16 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 14:52:16,685 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 14:52:16,685 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 14:52:16,685 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 14:52:16,685 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 14:52:16,685 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 14:52:16,685 - httpcore.connection - DEBUG - close.started
2025-05-29 14:52:16,685 - httpcore.connection - DEBUG - close.complete
2025-05-29 14:52:16,686 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 14:52:16,686 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 14:52:16,686 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 14:52:16,686 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 14:52:16,686 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 14:52:16,686 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 14:52:16,686 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 14:52:16,692 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 05:52:16 GMT'), (b'server', b'uvicorn'), (b'content-length', b'73070'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 14:52:16,692 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 14:52:16,692 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 14:52:16,692 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 14:52:16,692 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 14:52:16,692 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 14:52:16,692 - httpcore.connection - DEBUG - close.started
2025-05-29 14:52:16,692 - httpcore.connection - DEBUG - close.complete
2025-05-29 14:52:16,703 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 14:52:16,842 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 14:52:16,842 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 14:52:16,852 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 14:52:16,861 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 14:52:16,861 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 14:52:17,179 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 14:52:17,179 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 14:52:17,180 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 14:52:17,180 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 14:52:17,180 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 14:52:17,180 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 14:52:17,182 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 14:52:17,182 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 14:52:17,182 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 14:52:17,182 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 14:52:17,183 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 14:52:17,183 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 14:52:17,340 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 05:52:17 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
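The first run above crashes with `AttributeError: 'SimpleMemoryCalculator' object has no attribute 'known_models'`, while the restarted run logs "Known models in database: 4" at the same point, which is consistent with the attribute being read before `__init__` had assigned it. A minimal sketch of the likely shape of the fix — the class and attribute names come from the log, but the method body, the `_cache` helper, and the three placeholder entries are assumptions:

```python
class SimpleMemoryCalculator:
    """Sketch: define `known_models` in __init__ before anything reads it."""

    def __init__(self):
        # The crashed run logged "HuggingFace API initialized" and then the
        # AttributeError, suggesting later init code touched `known_models`
        # before this assignment existed.
        self.known_models = {
            # FLUX.1-schnell values match the DEBUG output in this log;
            # the other three entries are placeholders (the log only says "4").
            "black-forest-labs/FLUX.1-schnell": {
                "params_billions": 12.0,
                "fp16_gb": 24.0,
                "inference_fp16_gb": 36.0,
            },
            "known-model-b": {},
            "known-model-c": {},
            "known-model-d": {},
        }
        self._cache = {}  # hypothetical cache behind "Using cached memory data"

    def get_memory_requirements(self, model_id):
        # Repeated lookups hit the cache, matching the "cached" DEBUG lines.
        if model_id in self._cache:
            return self._cache[model_id]
        data = self.known_models.get(model_id, {})
        self._cache[model_id] = data
        return data
```

With this ordering, the "Known models in database: 4" DEBUG line can safely log `len(self.known_models)` during initialization.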
2025-05-29 14:52:17,340 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 14:52:17,340 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 14:52:17,340 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 14:52:17,340 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 14:52:17,340 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 14:52:17,340 - httpcore.connection - DEBUG - close.started
2025-05-29 14:52:17,340 - httpcore.connection - DEBUG - close.complete
2025-05-29 14:52:17,354 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 05:52:17 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 14:52:17,355 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 14:52:17,355 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 14:52:17,355 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 14:52:17,355 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 14:52:17,355 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 14:52:17,355 - httpcore.connection - DEBUG - close.started
2025-05-29 14:52:17,355 - httpcore.connection - DEBUG - close.complete
2025-05-29 14:52:18,360 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 14:52:18,573 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 15:59:34,212 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 15:59:34,212 - __main__ - DEBUG - API key found, length: 39
2025-05-29 15:59:34,212 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 15:59:34,212 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 15:59:34,212 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 15:59:34,212 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 15:59:34,212 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 15:59:34,212 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 15:59:34,212 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 15:59:34,212 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 15:59:34,216 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 15:59:34,216 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 15:59:34,645 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 15:59:34,646 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 15:59:34,646 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 15:59:34,646 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 15:59:34,646 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 15:59:34,646 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 15:59:34,646 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 15:59:34,646 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 15:59:34,646 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 15:59:34,646 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 15:59:34,646 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
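Every run repeats the same HardwareDetector sequence: platform and CPU info, a probe for `nvidia-smi` (treated as "no NVIDIA GPU" when absent), then a PyTorch import to check CUDA and MPS. A minimal sketch approximating those steps and producing a dict shaped like the "Detected specs" DEBUG line — the function name and exact structure are assumptions, not the project's actual code:

```python
import os
import platform
import shutil


def detect_hardware():
    """Approximate the detection steps visible in the log."""
    specs = {
        "platform": platform.system(),        # e.g. 'Darwin'
        "architecture": platform.machine(),   # e.g. 'arm64'
        "cpu_count": os.cpu_count(),
        "python_version": platform.python_version(),
        "gpu_info": None,
        "cuda_available": False,
        "mps_available": False,
        "torch_version": None,
    }
    # "Attempting GPU detection via nvidia-smi": a missing binary is logged
    # as "no NVIDIA GPU detected", not raised as an error.
    if shutil.which("nvidia-smi") is not None:
        # The real detector presumably parses `nvidia-smi` output here;
        # elided in this sketch.
        specs["gpu_info"] = "nvidia-smi available (parsing elided)"
    # "Checking PyTorch availability": import lazily so the probe still
    # works on machines without torch installed.
    try:
        import torch
        specs["torch_version"] = torch.__version__
        specs["cuda_available"] = torch.cuda.is_available()
        mps = getattr(torch.backends, "mps", None)
        specs["mps_available"] = bool(mps and mps.is_available())
    except ImportError:
        pass  # leave the defaults: no torch, no CUDA/MPS
    return specs
```

On the Apple Silicon host in this log, such a probe would report `cuda_available: False` and `mps_available: True`, matching the DEBUG output.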
2025-05-29 15:59:34,648 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 15:59:34,661 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 15:59:34,667 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 15:59:34,749 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 15:59:34,784 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 15:59:34,785 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 15:59:34,785 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 15:59:34,785 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 15:59:34,785 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 15:59:34,785 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 15:59:34,785 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 15:59:34,785 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 06:59:34 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 15:59:34,786 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 15:59:34,786 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 15:59:34,786 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 15:59:34,786 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 15:59:34,786 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 15:59:34,786 - httpcore.connection - DEBUG - close.started
2025-05-29 15:59:34,786 - httpcore.connection - DEBUG - close.complete
2025-05-29 15:59:34,786 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 15:59:34,787 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 15:59:34,787 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 15:59:34,787 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 15:59:34,787 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 15:59:34,787 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 15:59:34,787 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 15:59:34,792 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 06:59:34 GMT'), (b'server', b'uvicorn'), (b'content-length', b'73058'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 15:59:34,792 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 15:59:34,792 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 15:59:34,792 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 15:59:34,792 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 15:59:34,792 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 15:59:34,793 - httpcore.connection - DEBUG - close.started
2025-05-29 15:59:34,793 - httpcore.connection - DEBUG - close.complete
2025-05-29 15:59:34,803 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 15:59:34,825 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 15:59:34,825 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 15:59:34,940 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 15:59:34,940 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 15:59:34,971 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 15:59:35,099 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 15:59:35,099 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 15:59:35,100 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 15:59:35,100 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 15:59:35,100 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 15:59:35,100 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 15:59:35,222 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 15:59:35,222 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 15:59:35,223 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 15:59:35,223 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 15:59:35,223 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 15:59:35,223 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 15:59:35,237 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 06:59:35 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 15:59:35,238 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 15:59:35,238 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 15:59:35,238 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 15:59:35,238 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 15:59:35,238 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 15:59:35,238 - httpcore.connection - DEBUG - close.started
2025-05-29 15:59:35,239 - httpcore.connection - DEBUG - close.complete
2025-05-29 15:59:35,362 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 06:59:35 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 15:59:35,363 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 15:59:35,363 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 15:59:35,364 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 15:59:35,364 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 15:59:35,364 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 15:59:35,365 - httpcore.connection - DEBUG - close.started
2025-05-29 15:59:35,365 - httpcore.connection - DEBUG - close.complete
2025-05-29 15:59:36,012 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 15:59:36,260 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 16:02:12,960 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:12,961 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:12,961 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 16:02:12,961 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:02:12,961 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:12,962 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:12,962 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:02:12,962 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:12,962 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:32,785 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:32,785 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:32,785 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:02:32,785 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:32,785 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:32,786 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:02:32,786 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:32,786 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:47,460 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:47,460 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:47,460 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:02:47,460 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:47,460 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:47,460 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:02:47,460 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:47,460 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,300 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,300 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,300 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:02:52,300 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,300 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,300 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:02:52,300 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,300 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,452 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,452 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,452 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:02:52,452 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,452 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,452 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:02:52,452 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:52,452 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:02:54,804 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:02:54,804 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:02:54,804 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
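The "Generating memory recommendations" entries above pair FLUX.1-schnell's known footprint (24.0 GB of fp16 weights, 36.0 GB for inference) with the user's VRAM (8.0 GB, then 32.0 GB). The log shows only the inputs, not the decision rules, so the thresholds below are purely illustrative; the `diffusers` methods named in the strings (`enable_sequential_cpu_offload`, attention/VAE slicing) are real pipeline options, but whether this project recommends them is an assumption:

```python
def recommend_memory_strategy(model_gb, inference_gb, vram_gb):
    """Hypothetical sketch of the recommendation step seen in the log."""
    recs = []
    if vram_gb >= inference_gb:
        # e.g. 48 GB vs the 36 GB inference peak: everything stays on-GPU.
        recs.append("full fp16 inference fits in VRAM")
    elif vram_gb >= model_gb:
        # Weights fit (e.g. 32 GB vs 24 GB) but the inference peak does not:
        # reduce activation memory instead of offloading weights.
        recs.append("model weights fit; enable attention/VAE slicing for inference")
    else:
        # e.g. 8 GB vs 24 GB of weights: weights must be streamed from CPU.
        recs.append("use sequential CPU offload (enable_sequential_cpu_offload)")
        recs.append("consider a lower-precision or quantized variant")
    return recs
```

For the two VRAM sizes in this log, the sketch would suggest CPU offload at 8.0 GB and slicing at 32.0 GB, which is at least consistent with the "performance" profile being selected for the 32 GB configuration.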
2025-05-29 16:02:54,804 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-29 16:02:54,804 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-29 16:02:54,804 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': [{'name': 'RTX 5090', 'memory_mb': 32768}]}
2025-05-29 16:02:54,804 - auto_diffusers - DEBUG - GPU detected with 32.0 GB VRAM
2025-05-29 16:02:54,804 - auto_diffusers - INFO - Selected optimization profile: performance
2025-05-29 16:02:54,804 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-29 16:02:54,804 - auto_diffusers - DEBUG - Prompt length: 3456 characters
2025-05-29 16:02:54,804 - auto_diffusers - INFO - Sending request to Gemini API with tool calling enabled
2025-05-29 16:03:10,966 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-29 16:03:10,966 - auto_diffusers - DEBUG - Response length: 1710 characters
2025-05-29 16:08:00,894 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 16:08:00,894 - __main__ - DEBUG - API key found, length: 39
2025-05-29 16:08:00,894 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 16:08:00,894 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 16:08:00,894 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 16:08:00,894 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 16:08:00,894 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 16:08:00,894 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 16:08:00,894 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 16:08:00,894 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 16:08:00,898 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 16:08:00,898 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 16:08:01,310 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 16:08:01,310 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 16:08:01,310 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 16:08:01,310 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 16:08:01,310 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 16:08:01,310 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 16:08:01,310 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 16:08:01,310 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 16:08:01,310 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 16:08:01,310 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 16:08:01,310 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 16:08:01,312 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:08:01,325 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 16:08:01,325 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 16:08:01,404 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:08:01,439 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 16:08:01,440 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:08:01,440 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:08:01,440 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:08:01,440 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:08:01 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 16:08:01,441 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:08:01,441 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:08:01,441 - httpcore.connection - DEBUG - close.started
2025-05-29 16:08:01,441 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:08:01,442 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 16:08:01,442 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:08:01,442 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:08:01,442 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:08:01,442 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:08:01,442 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:08:01,442 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:08:01,447 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:08:01 GMT'), (b'server', b'uvicorn'), (b'content-length', b'73065'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 16:08:01,448 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 16:08:01,448 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:08:01,448 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:08:01,448 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:08:01,448 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:08:01,448 - httpcore.connection - DEBUG - close.started
2025-05-29 16:08:01,448 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:08:01,459 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 16:08:01,611 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 16:08:01,746 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:08:01,746 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 16:08:01,764 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:08:01,764 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 16:08:02,034 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:08:02,035 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:08:02,035 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:08:02,036 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:08:02,036 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:08:02,036 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:08:02,101 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:08:02,101 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:08:02,101 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:08:02,101 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:08:02,101 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:08:02,101 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:08:02,186 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:08:02 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 16:08:02,186 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 16:08:02,187 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:08:02,187 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:08:02,187 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:08:02,187 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:08:02,187 - httpcore.connection - DEBUG - close.started
2025-05-29 16:08:02,187 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:08:02,272 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:08:02 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection',
b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 16:08:02,273 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 16:08:02,273 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 16:08:02,273 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 16:08:02,273 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 16:08:02,273 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 16:08:02,273 - httpcore.connection - DEBUG - close.started 2025-05-29 16:08:02,273 - httpcore.connection - DEBUG - close.complete 2025-05-29 16:08:02,845 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 16:08:03,061 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-29 16:08:25,941 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:25,942 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:25,942 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-29 16:08:25,942 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-29 16:08:25,942 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:25,942 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:25,942 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 16:08:25,942 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:25,943 - 
simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:30,760 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:30,761 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:30,761 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-29 16:08:30,761 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:30,761 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:30,761 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 16:08:30,761 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:30,761 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,477 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,477 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,478 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM 2025-05-29 16:08:37,478 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,478 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,478 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 16:08:37,480 - simple_memory_calculator - INFO - Getting memory requirements for model: 
black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,480 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,527 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,527 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,527 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM 2025-05-29 16:08:37,527 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,527 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,527 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 16:08:37,527 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:37,527 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:39,349 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:39,350 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:39,350 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM 2025-05-29 16:08:39,350 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:39,350 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:08:39,351 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 16:08:39,351 - simple_memory_calculator - 
INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:08:39,351 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:08:39,351 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:08:39,351 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
2025-05-29 16:08:39,351 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-29 16:08:39,351 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-29 16:08:39,351 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': [{'name': 'RTX 5090', 'memory_mb': 32768}]}
2025-05-29 16:08:39,352 - auto_diffusers - DEBUG - GPU detected with 32.0 GB VRAM
2025-05-29 16:08:39,352 - auto_diffusers - INFO - Selected optimization profile: performance
2025-05-29 16:08:39,352 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-29 16:08:39,352 - auto_diffusers - DEBUG - Prompt length: 3456 characters
2025-05-29 16:08:39,352 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:08:39,352 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API:
2025-05-29 16:08:39,352 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:08:39,352 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations. NOTE: Advanced tool calling features are available when dependencies are installed.

TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: performance
- GPU: RTX 5090 (32.0 GB VRAM)

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: ⚠️ Model weights fit, enable memory optimizations
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION REQUIREMENTS: Please scrape and analyze the latest optimization techniques from this URL: https://huggingface.co/docs/diffusers/main/en/optimization

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Based on the hardware specs and optimization profile, generate Python code that includes:
1. **Memory Optimizations** (if low VRAM):
   - Model offloading (enable_model_cpu_offload, enable_sequential_cpu_offload)
   - Attention slicing (enable_attention_slicing)
   - VAE slicing (enable_vae_slicing)
   - Memory efficient attention
2. **Speed Optimizations**:
   - Appropriate torch.compile() usage
   - Optimal dtype selection (torch.float16, torch.bfloat16)
   - Device placement optimization
3. **Hardware-Specific Optimizations**:
   - CUDA optimizations for NVIDIA GPUs
   - MPS optimizations for Apple Silicon
   - CPU fallbacks when needed
4. **Model-Specific Optimizations**:
   - Appropriate scheduler selection
   - Optimal inference parameters
   - Pipeline configuration
5. **Data Type (dtype) Selection**:
   - If user specified a dtype, use that exact dtype in the code
   - If no dtype specified, automatically select the optimal dtype based on hardware:
     * Apple Silicon (MPS): prefer torch.bfloat16
     * NVIDIA GPUs: prefer torch.float16 or torch.bfloat16 based on capability
     * CPU only: use torch.float32
   - Add a comment explaining why that dtype was chosen

IMPORTANT GUIDELINES:
- Include all necessary imports
- Add brief comments explaining optimization choices
- Use the most current and effective optimization techniques
- Ensure code is production-ready

CODE STYLE REQUIREMENTS - GENERATE COMPACT CODE:
- Assign static values directly to function arguments instead of using variables when possible
- Minimize variable declarations - inline values where it improves readability
- Reduce exception handling to essential cases only - assume normal operation
- Use concise, direct code patterns
- Combine operations where logical and readable
- Avoid unnecessary intermediate variables
- Keep code clean and minimal while maintaining functionality

Examples of preferred compact style:
- pipe = Pipeline.from_pretrained("model", torch_dtype=torch.float16) instead of storing dtype in variable
- image = pipe("prompt", num_inference_steps=4, height=768, width=1360) instead of separate variables
- Direct assignment: device = "cuda" if torch.cuda.is_available() else "cpu"

Generate ONLY the Python code, no explanations before or after the code block.
2025-05-29 16:08:39,353 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:08:39,353 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-29 16:08:54,821 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-29 16:08:54,821 - auto_diffusers - DEBUG - Response length: 2512 characters
2025-05-29 16:16:14,940 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 16:16:14,940 - __main__ - DEBUG - API key found, length: 39
2025-05-29 16:16:14,940 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 16:16:14,940 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 16:16:14,940 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 16:16:14,940 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 16:16:14,940 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 16:16:14,940 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 16:16:14,940 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 16:16:14,940 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 16:16:14,943 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 16:16:14,943 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 16:16:15,359 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 16:16:15,359 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 16:16:15,359 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 16:16:15,359 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 16:16:15,359 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 16:16:15,359 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 16:16:15,359 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 16:16:15,359 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 16:16:15,359 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 16:16:15,359 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 16:16:15,359 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 16:16:15,362 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:16:15,374 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 16:16:15,381 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 16:16:15,454 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:16:15,488 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 16:16:15,489 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:16:15,489 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:16:15,489 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:16:15,489 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:16:15,489 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:16:15,489 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:16:15,490 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:16:15 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 16:16:15,490 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 16:16:15,490 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:16:15,490 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:16:15,490 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:16:15,490 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:16:15,490 - httpcore.connection - DEBUG - close.started
2025-05-29 16:16:15,490 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:16:15,490 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 16:16:15,491 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:16:15,491 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:16:15,491 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:16:15,491 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:16:15,491 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:16:15,491 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:16:15,496 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:16:15 GMT'), (b'server', b'uvicorn'), (b'content-length', b'73064'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 16:16:15,496 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 16:16:15,496 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:16:15,496 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:16:15,496 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:16:15,496 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:16:15,496 - httpcore.connection - DEBUG - close.started
2025-05-29 16:16:15,496 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:16:15,507 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 16:16:15,593 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:16:15,593 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 16:16:15,648 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:16:15,648 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 16:16:15,663 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 16:16:15,894 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:16:15,895 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:16:15,896 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:16:15,896 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:16:15,896 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:16:15,896 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:16:15,930 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:16:15,931 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:16:15,931 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:16:15,931 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:16:15,931 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:16:15,931 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:16:16,047 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:16:16 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 16:16:16,047 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 16:16:16,047 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:16:16,048 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:16:16,048 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:16:16,048 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:16:16,048 - httpcore.connection - DEBUG - close.started
2025-05-29 16:16:16,048 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:16:16,073 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:16:16 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 16:16:16,074 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 16:16:16,074 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:16:16,074 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:16:16,074 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:16:16,074 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:16:16,074 - httpcore.connection - DEBUG - close.started
2025-05-29 16:16:16,074 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:16:16,750 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 16:16:16,967 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 16:16:50,011 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:50,012 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:50,012 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 16:16:50,012 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:16:50,012 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:50,012 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:50,012 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:16:50,012 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:50,012 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:56,212 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:56,212 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:56,212 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:16:56,212 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:56,212 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:56,212 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:16:56,212 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:16:56,212 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,382 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,382 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,383 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:17:00,383 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,383 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,383 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:17:00,383 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,383 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,534 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,534 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,534 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:17:00,534 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,534 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,534 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:17:00,534 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:00,535 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:17:02,112 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:17:02,112 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:17:02,112 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
2025-05-29 16:17:02,112 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-29 16:17:02,112 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-29 16:17:02,112 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input',
'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': [{'name': 'RTX 5090', 'memory_mb': 32768}]} 2025-05-29 16:17:02,112 - auto_diffusers - DEBUG - GPU detected with 32.0 GB VRAM 2025-05-29 16:17:02,112 - auto_diffusers - INFO - Selected optimization profile: performance 2025-05-29 16:17:02,112 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API 2025-05-29 16:17:02,113 - auto_diffusers - DEBUG - Prompt length: 3456 characters 2025-05-29 16:17:02,113 - auto_diffusers - INFO - ================================================================================ 2025-05-29 16:17:02,113 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API: 2025-05-29 16:17:02,113 - auto_diffusers - INFO - ================================================================================ 2025-05-29 16:17:02,113 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations. NOTE: Advanced tool calling features are available when dependencies are installed. 
TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: performance
- GPU: RTX 5090 (32.0 GB VRAM)

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: ⚠️ Model weights fit, enable memory optimizations
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION REQUIREMENTS:
Please scrape and analyze the latest optimization techniques from this URL: https://huggingface.co/docs/diffusers/main/en/optimization

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Based on the hardware specs and optimization profile, generate Python code that includes:
1. **Memory Optimizations** (if low VRAM):
   - Model offloading (enable_model_cpu_offload, enable_sequential_cpu_offload)
   - Attention slicing (enable_attention_slicing)
   - VAE slicing (enable_vae_slicing)
   - Memory efficient attention
2. **Speed Optimizations**:
   - Appropriate torch.compile() usage
   - Optimal dtype selection (torch.float16, torch.bfloat16)
   - Device placement optimization
3. **Hardware-Specific Optimizations**:
   - CUDA optimizations for NVIDIA GPUs
   - MPS optimizations for Apple Silicon
   - CPU fallbacks when needed
4. **Model-Specific Optimizations**:
   - Appropriate scheduler selection
   - Optimal inference parameters
   - Pipeline configuration
5. **Data Type (dtype) Selection**:
   - If user specified a dtype, use that exact dtype in the code
   - If no dtype specified, automatically select the optimal dtype based on hardware:
     * Apple Silicon (MPS): prefer torch.bfloat16
     * NVIDIA GPUs: prefer torch.float16 or torch.bfloat16 based on capability
     * CPU only: use torch.float32
   - Add a comment explaining why that dtype was chosen

IMPORTANT GUIDELINES:
- Include all necessary imports
- Add brief comments explaining optimization choices
- Use the most current and effective optimization techniques
- Ensure code is production-ready

CODE STYLE REQUIREMENTS - GENERATE COMPACT CODE:
- Assign static values directly to function arguments instead of using variables when possible
- Minimize variable declarations - inline values where it improves readability
- Reduce exception handling to essential cases only - assume normal operation
- Use concise, direct code patterns
- Combine operations where logical and readable
- Avoid unnecessary intermediate variables
- Keep code clean and minimal while maintaining functionality

Examples of preferred compact style:
- pipe = Pipeline.from_pretrained("model", torch_dtype=torch.float16) instead of storing dtype in variable
- image = pipe("prompt", num_inference_steps=4, height=768, width=1360) instead of separate variables
- Direct assignment: device = "cuda" if torch.cuda.is_available() else "cpu"

Generate ONLY the Python code, no explanations before or after the code block.
2025-05-29 16:17:02,113 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:17:02,113 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-29 16:17:17,152 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-29 16:17:17,153 - auto_diffusers - DEBUG - Response length: 2451 characters
2025-05-29 16:47:23,476 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 16:47:23,476 - __main__ - DEBUG - API key found, length: 39
2025-05-29 16:47:23,476 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 16:47:23,476 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 16:47:23,477 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 16:47:23,477 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 16:47:23,477 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 16:47:23,477 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 16:47:23,477 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 16:47:23,477 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 16:47:23,480 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 16:47:23,480 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 16:47:23,928 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 16:47:23,928 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 16:47:23,928 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 16:47:23,928 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 16:47:23,928 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 16:47:23,928 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 16:47:23,928 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 16:47:23,928 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 16:47:23,928 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 16:47:23,928 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 16:47:23,928 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 16:47:23,930 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:47:23,944 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 16:47:23,950 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 16:47:24,025 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:47:24,059 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 16:47:24,060 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:47:24,060 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:47:24,060 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:47:24,060 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:47:24 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 16:47:24,061 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:47:24,061 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:47:24,061 - httpcore.connection - DEBUG - close.started
2025-05-29 16:47:24,061 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:47:24,062 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 16:47:24,062 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:47:24,062 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:47:24,062 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:47:24,062 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:47:24,062 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:47:24,062 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:47:24,068 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:47:24 GMT'), (b'server', b'uvicorn'), (b'content-length', b'73064'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 16:47:24,068 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 16:47:24,068 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:47:24,068 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:47:24,068 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:47:24,068 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:47:24,068 - httpcore.connection - DEBUG - close.started
2025-05-29 16:47:24,068 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:47:24,079 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 16:47:24,140 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:47:24,140 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 16:47:24,220 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:47:24,220 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 16:47:24,223 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 16:47:24,415 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:47:24,415 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:47:24,415 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:47:24,415 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:47:24,415 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:47:24,415 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:47:24,504 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:47:24,505 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:47:24,505 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:47:24,505 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:47:24,505 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:47:24,505 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:47:24,553 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:47:24 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 16:47:24,554 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 16:47:24,554 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:47:24,554 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:47:24,554 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:47:24,554 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:47:24,554 - httpcore.connection - DEBUG - close.started
2025-05-29 16:47:24,554 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:47:24,648 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:47:24 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 16:47:24,648 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 16:47:24,648 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:47:24,648 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:47:24,648 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:47:24,648 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:47:24,649 - httpcore.connection - DEBUG - close.started
2025-05-29 16:47:24,649 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:47:25,332 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 16:47:25,554 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 16:47:35,239 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:35,239 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:35,239 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 16:47:35,239 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:47:35,239 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:35,239 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:35,239 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:47:35,240 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:35,240 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:40,282 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:40,282 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:40,282 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 16:47:40,282 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:40,282 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:40,282 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:47:40,282 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:40,282 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:43,895 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:43,895 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:43,895 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:47:43,895 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:43,895 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:43,895 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:47:43,895 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:43,896 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:44,048 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:44,048 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:44,048 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:47:44,048 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:44,048 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:44,048 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:47:44,048 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:44,048 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,011 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,011 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,011 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 16:47:48,011 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,011 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,011 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 16:47:48,011 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,012 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,012 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-29 16:47:48,012 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
2025-05-29 16:47:48,012 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-29 16:47:48,012 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-29 16:47:48,012 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': [{'name': 'RTX 5090', 'memory_mb': 32768}]}
2025-05-29 16:47:48,012 - auto_diffusers - DEBUG - GPU detected with 32.0 GB VRAM
2025-05-29 16:47:48,012 - auto_diffusers - INFO - Selected optimization profile: performance
2025-05-29 16:47:48,012 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-29 16:47:48,012 - auto_diffusers - DEBUG - Prompt length: 7613 characters
2025-05-29 16:47:48,012 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:47:48,012 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API:
2025-05-29 16:47:48,013 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:47:48,013 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations.

NOTE: This system includes curated optimization knowledge from HuggingFace documentation.

TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: performance
- GPU: RTX 5090 (32.0 GB VRAM)

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: ⚠️ Model weights fit, enable memory optimizations
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION KNOWLEDGE BASE:

# DIFFUSERS OPTIMIZATION TECHNIQUES

## Memory Optimization Techniques

### 1. Model CPU Offloading
Use `enable_model_cpu_offload()` to move models between GPU and CPU automatically:
```python
pipe.enable_model_cpu_offload()
```
- Saves significant VRAM by keeping only active models on GPU
- Automatic management, no manual intervention needed
- Compatible with all pipelines

### 2. Sequential CPU Offloading
Use `enable_sequential_cpu_offload()` for more aggressive memory saving:
```python
pipe.enable_sequential_cpu_offload()
```
- More memory efficient than model offloading
- Moves models to CPU after each forward pass
- Best for very limited VRAM scenarios

### 3. Attention Slicing
Use `enable_attention_slicing()` to reduce memory during attention computation:
```python
pipe.enable_attention_slicing()
# or specify slice size
pipe.enable_attention_slicing("max")  # maximum slicing
pipe.enable_attention_slicing(1)  # slice_size = 1
```
- Trades compute time for memory
- Most effective for high-resolution images
- Can be combined with other techniques

### 4. VAE Slicing
Use `enable_vae_slicing()` for large batch processing:
```python
pipe.enable_vae_slicing()
```
- Decodes images one at a time instead of all at once
- Essential for batch sizes > 4
- Minimal performance impact on single images

### 5. VAE Tiling
Use `enable_vae_tiling()` for high-resolution image generation:
```python
pipe.enable_vae_tiling()
```
- Enables 4K+ image generation on 8GB VRAM
- Splits images into overlapping tiles
- Automatically disabled for 512x512 or smaller images

### 6. Memory Efficient Attention (xFormers)
Use `enable_xformers_memory_efficient_attention()` if xFormers is installed:
```python
pipe.enable_xformers_memory_efficient_attention()
```
- Significantly reduces memory usage and improves speed
- Requires xformers library installation
- Compatible with most models

## Performance Optimization Techniques

### 1. Half Precision (FP16/BF16)
Use lower precision for better memory and speed:
```python
# FP16 (widely supported)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# BF16 (better numerical stability, newer hardware)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
- FP16: Halves memory usage, widely supported
- BF16: Better numerical stability, requires newer GPUs
- Essential for most optimization scenarios

### 2. Torch Compile (PyTorch 2.0+)
Use `torch.compile()` for significant speed improvements:
```python
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
# For some models, compile VAE too:
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="reduce-overhead", fullgraph=True)
```
- 5-50% speed improvement
- Requires PyTorch 2.0+
- First run is slower due to compilation

### 3. Fast Schedulers
Use faster schedulers for fewer steps:
```python
from diffusers import LMSDiscreteScheduler, UniPCMultistepScheduler
# LMS Scheduler (good quality, fast)
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
# UniPC Scheduler (fastest)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

## Hardware-Specific Optimizations

### NVIDIA GPU Optimizations
```python
# Enable Tensor Cores
torch.backends.cudnn.benchmark = True
# Optimal data type for NVIDIA
torch_dtype = torch.float16  # or torch.bfloat16 for RTX 30/40 series
```

### Apple Silicon (MPS) Optimizations
```python
# Use MPS device
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
# Recommended dtype for Apple Silicon
torch_dtype = torch.bfloat16  # Better than float16 on Apple Silicon
# Attention slicing often helps on MPS
pipe.enable_attention_slicing()
```

### CPU Optimizations
```python
# Use float32 for CPU
torch_dtype = torch.float32
# Enable optimized attention
pipe.enable_attention_slicing()
```

## Model-Specific Guidelines

### FLUX Models
- Do NOT use guidance_scale parameter (not needed for FLUX)
- Use 4-8 inference steps maximum
- BF16 dtype recommended
- Enable attention slicing for memory optimization

### Stable Diffusion XL
- Enable attention slicing for high resolutions
- Use refiner model sparingly to save memory
- Consider VAE tiling for >1024px images

### Stable Diffusion 1.5/2.1
- Very memory efficient base models
- Can often run without optimizations on 8GB+ VRAM
- Enable VAE slicing for batch processing

## Memory Usage Estimation
- FLUX.1: ~24GB for full precision, ~12GB for FP16
- SDXL: ~7GB for FP16, ~14GB for FP32
- SD 1.5: ~2GB for FP16, ~4GB for FP32

## Optimization Combinations by VRAM

### 24GB+ VRAM (High-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### 12-24GB VRAM (Mid-range)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

### 8-12GB VRAM (Entry-level)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```

### <8GB VRAM (Low-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing("max")
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Using the OPTIMIZATION KNOWLEDGE BASE above, generate Python code that:
1. **Selects the best optimization techniques** for the specific hardware profile
2. **Applies appropriate memory optimizations** based on available VRAM
3. **Uses optimal data types** for the target hardware:
   - User specified dtype (if provided): Use exactly as specified
   - Apple Silicon (MPS): prefer torch.bfloat16
   - NVIDIA GPUs: prefer torch.float16 or torch.bfloat16
   - CPU only: use torch.float32
4. **Implements hardware-specific optimizations** (CUDA, MPS, CPU)
5. **Follows model-specific guidelines** (e.g., FLUX guidance_scale handling)

IMPORTANT GUIDELINES:
- Reference the OPTIMIZATION KNOWLEDGE BASE to select appropriate techniques
- Include all necessary imports
- Add brief comments explaining optimization choices
- Generate compact, production-ready code
- Inline values where possible for concise code
- Generate ONLY the Python code, no explanations before or after the code block

2025-05-29 16:47:48,013 - auto_diffusers - INFO - ================================================================================
2025-05-29 16:47:48,013 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-29 16:48:09,467 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-29 16:48:09,467 - auto_diffusers - DEBUG - Response length: 3996 characters
2025-05-29 16:57:34,668 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 16:57:34,668 - __main__ - DEBUG - API key found, length: 39
2025-05-29 16:57:34,668 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 16:57:34,668 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 16:57:34,668 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 16:57:34,668 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 16:57:34,668 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 16:57:34,668 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 16:57:34,668 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 16:57:34,668 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 16:57:34,672 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 16:57:34,672 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 16:57:35,129 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 16:57:35,129 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 16:57:35,129 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 16:57:35,129 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 16:57:35,129 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 16:57:35,129 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 16:57:35,129 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 16:57:35,129 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 16:57:35,129 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 16:57:35,129 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 16:57:35,129 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 16:57:35,131 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:57:35,145 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 16:57:35,145 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 16:57:35,222 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 16:57:35,257 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 16:57:35,257 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:57:35,258 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:57:35,258 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:57:35,258 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:57:35,258 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:57:35,258 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:57:35,258 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:57:35 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 16:57:35,259 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 16:57:35,259 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:57:35,259 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:57:35,259 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:57:35,259 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:57:35,259 - httpcore.connection - DEBUG - close.started
2025-05-29 16:57:35,259 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:57:35,259 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 16:57:35,260 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:57:35,260 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:57:35,260 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:57:35,260 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:57:35,260 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 16:57:35,260 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 16:57:35,265 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 07:57:35 GMT'), (b'server', b'uvicorn'), (b'content-length', b'75554'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 16:57:35,265 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 16:57:35,265 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 16:57:35,265 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 16:57:35,265 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 16:57:35,266 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 16:57:35,266 - httpcore.connection - DEBUG - close.started
2025-05-29 16:57:35,266 - httpcore.connection - DEBUG - close.complete
2025-05-29 16:57:35,276 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 16:57:35,346 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:57:35,346 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 16:57:35,425 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 16:57:35,434 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 16:57:35,434 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 16:57:35,637 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 16:57:35,638 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 16:57:35,638 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 16:57:35,638 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 16:57:35,638
- httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 16:57:35,638 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 16:57:35,751 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 16:57:35,751 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 16:57:35,752 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 16:57:35,752 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 16:57:35,752 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 16:57:35,752 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 16:57:35,786 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:57:35 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 16:57:35,787 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 16:57:35,787 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 16:57:35,787 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 16:57:35,787 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 16:57:35,787 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 16:57:35,787 - httpcore.connection - DEBUG - close.started 2025-05-29 16:57:35,788 - httpcore.connection - DEBUG - close.complete 2025-05-29 16:57:35,912 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 07:57:35 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), 
(b'Content-Encoding', b'gzip')]) 2025-05-29 16:57:35,912 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-29 16:57:35,912 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 16:57:35,912 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 16:57:35,912 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 16:57:35,912 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 16:57:35,912 - httpcore.connection - DEBUG - close.started 2025-05-29 16:57:35,912 - httpcore.connection - DEBUG - close.complete 2025-05-29 16:57:36,487 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 16:57:36,707 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-29 16:57:49,246 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:57:49,246 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:57:49,246 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-29 16:57:49,246 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-29 16:57:49,246 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:57:49,246 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 16:57:49,247 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 16:57:49,247 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 16:57:49,247 - simple_memory_calculator - DEBUG - Using cached memory data for 
black-forest-labs/FLUX.1-schnell 2025-05-29 17:00:22,113 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-29 17:00:22,113 - __main__ - DEBUG - API key found, length: 39 2025-05-29 17:00:22,113 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 17:00:22,113 - auto_diffusers - DEBUG - API key length: 39 2025-05-29 17:00:22,113 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-29 17:00:22,113 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-29 17:00:22,113 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-29 17:00:22,113 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-29 17:00:22,113 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-29 17:00:22,113 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-29 17:00:22,117 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-29 17:00:22,117 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-29 17:00:22,530 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-29 17:00:22,530 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-29 17:00:22,530 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-29 17:00:22,530 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-29 17:00:22,530 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-29 17:00:22,530 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-29 17:00:22,530 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-29 17:00:22,530 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-29 17:00:22,530 - 
simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-29 17:00:22,530 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-29 17:00:22,530 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-29 17:00:22,532 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 17:00:22,545 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-29 17:00:22,550 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 17:00:22,624 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 17:00:22,657 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None 2025-05-29 17:00:22,657 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:00:22,657 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:00:22,657 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:00:22 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-29 17:00:22,658 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - 
response_closed.started 2025-05-29 17:00:22,658 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:00:22,658 - httpcore.connection - DEBUG - close.started 2025-05-29 17:00:22,658 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:00:22,659 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None 2025-05-29 17:00:22,659 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:00:22,659 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:00:22,659 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:00:22,659 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:00:22,659 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:00:22,659 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:00:22,665 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:00:22 GMT'), (b'server', b'uvicorn'), (b'content-length', b'75615'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-29 17:00:22,665 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK" 2025-05-29 17:00:22,665 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:00:22,665 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:00:22,665 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:00:22,665 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:00:22,665 - httpcore.connection - DEBUG - close.started 2025-05-29 17:00:22,665 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:00:22,676 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-29 17:00:22,750 - httpcore.connection - DEBUG - connect_tcp.complete 
return_value= 2025-05-29 17:00:22,750 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-29 17:00:22,815 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:00:22,815 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-29 17:00:22,823 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-29 17:00:23,027 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 17:00:23,028 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:00:23,028 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:00:23,029 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:00:23,029 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:00:23,029 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:00:23,090 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 17:00:23,090 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:00:23,091 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:00:23,091 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:00:23,091 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:00:23,091 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:00:23,199 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:00:23 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 17:00:23,201 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 
OK" 2025-05-29 17:00:23,201 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:00:23,201 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:00:23,201 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:00:23,201 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:00:23,201 - httpcore.connection - DEBUG - close.started 2025-05-29 17:00:23,202 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:00:23,232 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:00:23 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-29 17:00:23,232 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-29 17:00:23,232 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:00:23,233 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:00:23,233 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:00:23,233 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:00:23,233 - httpcore.connection - DEBUG - close.started 2025-05-29 17:00:23,233 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:00:23,883 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 17:00:24,103 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-29 17:00:34,004 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 17:00:34,004 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 
2025-05-29 17:00:34,005 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-29 17:00:34,005 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-29 17:00:34,005 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 17:00:34,005 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 17:00:34,005 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 17:00:34,005 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 17:00:34,005 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 17:03:33,448 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-29 17:03:33,448 - __main__ - DEBUG - API key found, length: 39 2025-05-29 17:03:33,448 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 17:03:33,448 - auto_diffusers - DEBUG - API key length: 39 2025-05-29 17:03:33,448 - auto_diffusers - DEBUG - Creating tools for Gemini 2025-05-29 17:03:33,448 - auto_diffusers - INFO - Created 3 tools for Gemini 2025-05-29 17:03:33,448 - auto_diffusers - INFO - Successfully configured Gemini AI model with tools 2025-05-29 17:03:33,448 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.12.9 2025-05-29 17:03:33,448 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-29 17:03:33,452 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU 
detected 2025-05-29 17:03:33,452 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-29 17:03:33,924 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-29 17:03:33,924 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-29 17:03:33,924 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-29 17:03:33,924 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.12.9', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-29 17:03:33,924 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-29 17:03:33,924 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-29 17:03:33,925 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-29 17:03:33,925 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-29 17:03:33,925 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-29 17:03:33,925 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-29 17:03:33,925 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-29 17:03:33,927 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 17:03:33,940 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-29 17:03:33,947 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 17:03:33,995 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 17:03:34,042 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None 2025-05-29 17:03:34,042 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:03:34,042 - httpcore.http11 - DEBUG - 
send_request_headers.started request= 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:03:33 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-29 17:03:34,043 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-29 17:03:34,043 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - close.started 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None 2025-05-29 17:03:34,044 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:03:34,044 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:03:34,045 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - receive_response_headers.complete 
return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:03:33 GMT'), (b'server', b'uvicorn'), (b'content-length', b'74295'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-29 17:03:34,051 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK" 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:03:34,051 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:03:34,051 - httpcore.connection - DEBUG - close.started 2025-05-29 17:03:34,051 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:03:34,063 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-29 17:03:34,200 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-29 17:03:34,469 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:03:34,470 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-29 17:03:34,476 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:03:34,476 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-29 17:03:34,760 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:03:34,761 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:03:34,762 - httpcore.http11 - DEBUG - 
receive_response_headers.started request= 2025-05-29 17:03:34,771 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 17:03:34,771 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:03:34,772 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:03:34,907 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:03:34 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-29 17:03:34,907 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-29 17:03:34,907 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:03:34,908 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:03:34,908 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:03:34,908 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:03:34,908 - httpcore.connection - DEBUG - close.started 2025-05-29 17:03:34,908 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:03:34,919 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:03:34 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 17:03:34,919 - httpx - INFO - HTTP Request: GET 
https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 17:03:34,919 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:03:34,919 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:03:34,920 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:03:34,920 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:03:34,920 - httpcore.connection - DEBUG - close.started 2025-05-29 17:03:34,920 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:03:35,503 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 17:03:35,733 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-29 17:05:44,828 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-29 17:05:44,828 - __main__ - DEBUG - API key found, length: 39 2025-05-29 17:05:44,828 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 17:05:44,828 - auto_diffusers - DEBUG - API key length: 39 2025-05-29 17:05:44,828 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-29 17:05:44,828 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-29 17:05:44,828 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-29 17:05:44,831 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-29 17:05:44,832 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-29 17:05:45,252 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-29 17:05:45,252 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 
2025-05-29 17:05:45,252 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-29 17:05:45,252 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-29 17:05:45,252 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-29 17:05:45,252 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-29 17:05:45,252 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-29 17:05:45,252 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-29 17:05:45,252 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-29 17:05:45,252 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-29 17:05:45,252 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-29 17:05:45,254 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 17:05:45,267 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-29 17:05:45,272 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 17:05:45,344 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 17:05:45,377 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None 2025-05-29 17:05:45,377 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - 
send_request_body.complete 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:05:45,378 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-29 17:05:45,379 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 17:05:45,379 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - close.started 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - close.complete 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None 2025-05-29 17:05:45,379 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 17:05:45,380 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'server', b'uvicorn'), (b'content-length', b'75706'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-29 17:05:45,385 - httpx - INFO - HTTP Request: HEAD 
http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 17:05:45,385 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 17:05:45,385 - httpcore.connection - DEBUG - close.started
2025-05-29 17:05:45,385 - httpcore.connection - DEBUG - close.complete
2025-05-29 17:05:45,396 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 17:05:45,466 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 17:05:45,466 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 17:05:45,538 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 17:05:45,538 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 17:05:45,548 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 17:05:45,746 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 17:05:45,746 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 17:05:45,747 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 17:05:45,821 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 17:05:45,821 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 17:05:45,822 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 17:05:45,885 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 17:05:45,886 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 17:05:45,886 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 17:05:45,887 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 17:05:45,887 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 17:05:45,887 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 17:05:45,887 - httpcore.connection - DEBUG - close.started
2025-05-29 17:05:45,888 - httpcore.connection - DEBUG - close.complete
2025-05-29 17:05:45,965 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 08:05:45 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 17:05:45,965 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 17:05:45,966 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 17:05:45,967 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 17:05:45,967 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 17:05:45,967 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 17:05:45,968 - httpcore.connection - DEBUG - close.started
2025-05-29 17:05:45,968 - httpcore.connection - DEBUG - close.complete
2025-05-29 17:05:46,631 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 17:05:46,857 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 17:05:55,606 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 17:05:55,606 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 17:05:55,607 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 17:05:55,607 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 17:05:55,607 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 17:05:55,607 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:38:26,490 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 23:38:26,490 - __main__ - DEBUG - API key found, length: 39
2025-05-29 23:38:26,490 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 23:38:26,490 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:38:26,490 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 23:38:26,490 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:38:26,490 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:38:26,490 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:38:26,491 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 23:38:26,491 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:38:26,494 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:38:26,494 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:38:26,909 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:38:26,909 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:38:26,909 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:38:26,909 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:38:26,909 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:38:26,909 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:38:26,909 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:38:26,909 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:38:26,909 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:38:26,909 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:38:26,909 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:38:26,911 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:38:26,924 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:38:26,929 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:38:27,000 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:38:27,034 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=None socket_options=None
2025-05-29 23:38:27,035 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:38:27,035 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:38:27,035 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:38:27,035 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:38:27,035 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:38:27,035 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:38:27,035 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:38:27 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 23:38:27,036 - httpx - INFO - HTTP Request: GET http://localhost:7860/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 23:38:27,036 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:38:27,036 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:38:27,036 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:38:27,036 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:38:27,036 - httpcore.connection - DEBUG - close.started
2025-05-29 23:38:27,036 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:38:27,036 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7860 local_address=None timeout=3 socket_options=None
2025-05-29 23:38:27,037 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:38:27,037 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:38:27,037 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:38:27,037 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:38:27,037 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:38:27,037 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:38:27,042 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:38:27 GMT'), (b'server', b'uvicorn'), (b'content-length', b'75707'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 23:38:27,042 - httpx - INFO - HTTP Request: HEAD http://localhost:7860/ "HTTP/1.1 200 OK"
2025-05-29 23:38:27,043 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:38:27,043 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:38:27,043 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:38:27,043 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:38:27,043 - httpcore.connection - DEBUG - close.started
2025-05-29 23:38:27,043 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:38:27,053 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 23:38:27,124 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:38:27,124 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 23:38:27,198 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:38:27,198 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 23:38:27,215 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 23:38:27,405 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:38:27,406 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:38:27,406 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:38:27,406 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:38:27,407 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:38:27,407 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:38:27,488 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:38:27,489 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:38:27,489 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:38:27,489 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:38:27,490 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:38:27,490 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:38:27,549 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:38:27 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 23:38:27,550 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 23:38:27,550 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:38:27,550 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:38:27,550 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:38:27,550 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:38:27,551 - httpcore.connection - DEBUG - close.started
2025-05-29 23:38:27,551 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:38:27,636 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:38:27 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 23:38:27,636 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 23:38:27,637 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:38:27,637 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:38:27,638 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:38:27,638 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:38:27,638 - httpcore.connection - DEBUG - close.started
2025-05-29 23:38:27,638 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:38:28,286 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:38:28,884 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 23:38:42,661 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:38:42,662 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:38:42,662 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 23:38:42,662 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 23:38:42,662 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:38:42,662 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:38:42,662 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 23:38:42,662 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:38:42,662 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:43:46,493 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 23:43:46,494 - __main__ - DEBUG - API key found, length: 39
2025-05-29 23:43:46,494 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 23:43:46,494 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:43:46,494 - auto_diffusers - DEBUG - Creating tools for Gemini
2025-05-29 23:43:46,494 - auto_diffusers - INFO - Created 3 tools for Gemini
2025-05-29 23:43:46,494 - auto_diffusers - INFO - Successfully configured Gemini AI model with tools
2025-05-29 23:43:46,494 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:43:46,494 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:43:46,494 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:43:46,494 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.12.9
2025-05-29 23:43:46,494 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:43:46,497 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:43:46,497 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:43:46,942 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:43:46,942 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:43:46,942 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:43:46,942 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.12.9', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:43:46,942 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:43:46,942 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:43:46,942 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:43:46,942 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:43:46,942 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:43:46,942 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:43:46,942 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:43:46,944 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:43:46,945 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:43:46,963 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:43:47,014 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:43:47,169 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:43:47,170 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 23:43:47,218 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 23:43:47,500 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:43:47,500 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:43:47,501 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:43:47,501 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:43:47,502 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:43:47,502 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:43:47,667 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:43:47 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 23:43:47,669 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 23:43:47,669 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:43:47,669 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:43:47,670 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:43:47,670 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:43:47,670 - httpcore.connection - DEBUG - close.started
2025-05-29 23:43:47,671 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:45:37,625 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 23:45:37,625 - __main__ - DEBUG - API key found, length: 39
2025-05-29 23:45:37,625 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 23:45:37,625 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:45:37,625 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 23:45:37,625 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:45:37,625 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:45:37,626 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:45:37,626 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 23:45:37,626 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:45:37,628 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:45:37,629 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:45:38,048 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:45:38,048 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:45:38,048 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:45:38,048 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:45:38,048 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:45:38,048 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:45:38,048 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:45:38,048 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:45:38,048 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:45:38,048 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:45:38,048 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:45:38,051 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:45:38,064 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:45:38,071 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:45:38,143 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:45:38,178 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-29 23:45:38,179 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:45:38,179 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:45:38,179 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:45:38,179 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:45:38 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 23:45:38,180 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:45:38,180 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:45:38,180 - httpcore.connection - DEBUG - close.started
2025-05-29 23:45:38,180 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:45:38,181 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-29 23:45:38,181 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:45:38,181 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:45:38,181 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:45:38,181 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:45:38,181 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:45:38,181 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:45:38,188 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:45:38 GMT'), (b'server', b'uvicorn'), (b'content-length', b'113569'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 23:45:38,188 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-29 23:45:38,188 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:45:38,188 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:45:38,188 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:45:38,188 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:45:38,188 - httpcore.connection - DEBUG - close.started
2025-05-29 23:45:38,189 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:45:38,200 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 23:45:38,227 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:45:38,227 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 23:45:38,337 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:45:38,338 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 23:45:38,349 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 23:45:38,503 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:45:38,503 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:45:38,503 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:45:38,503 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:45:38,503 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:45:38,503 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:45:38,611 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:45:38,611 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:45:38,611 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:45:38,611 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:45:38,611 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:45:38,611 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:45:38,641 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:45:38 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 23:45:38,642 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 23:45:38,642 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:45:38,642 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:45:38,642 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:45:38,642 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:45:38,642 - httpcore.connection - DEBUG - close.started
2025-05-29 23:45:38,642 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:45:38,750 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:45:38 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 23:45:38,750 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 23:45:38,750 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:45:38,751 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:45:38,751 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:45:38,751 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:45:38,751 - httpcore.connection - DEBUG - close.started
2025-05-29 23:45:38,751 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:45:39,336 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:45:39,554 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 23:45:56,868 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:45:56,868 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:45:56,868 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 23:45:56,869 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 23:45:56,869 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:45:56,869 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:45:56,869 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 23:45:56,869 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:45:56,869 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:46:55,462 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:46:55,462 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:46:55,462 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 23:46:55,462 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:46:55,462 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:46:55,462 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 23:46:55,462 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:46:55,462 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,722 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,722 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,722 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 23:47:01,723 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,723 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,723 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 23:47:01,723 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,723 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,774 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,775 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,775 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 32.0GB VRAM
2025-05-29 23:47:01,775 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,775 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,775 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 23:47:01,775 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:47:01,775 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:48:44,089 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 23:48:44,089 - __main__ - DEBUG - API key found, length: 39
2025-05-29 23:48:44,089 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 23:48:44,089 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:48:44,089 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 23:48:44,089 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:48:44,089 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:48:44,089 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:48:44,089 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 23:48:44,089 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:48:44,092 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:48:44,092 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:48:44,496 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:48:44,497 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:48:44,497 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:48:44,497 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:48:44,497 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:48:44,497 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:48:44,497 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:48:44,497 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:48:44,497 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:48:44,497 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:48:44,497 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:48:44,499 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:48:44,512 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:48:44,517 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:48:44,590 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:48:44,625 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-29 23:48:44,626 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:48:44,626 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:48:44,626 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:48:44,626 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:48:44,626 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:48:44,627 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:48:44,627 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:48:44 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 23:48:44,627 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 23:48:44,627 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:48:44,627 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:48:44,627 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:48:44,627 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:48:44,627 - httpcore.connection - DEBUG - close.started
2025-05-29 23:48:44,627 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:48:44,628 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-29 23:48:44,628 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:48:44,628 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:48:44,628 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:48:44,629 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:48:44,629 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:48:44,629 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:48:44,635 - httpcore.http11 - DEBUG -
receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:48:44 GMT'), (b'server', b'uvicorn'), (b'content-length', b'113500'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-29 23:48:44,635 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-29 23:48:44,635 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:48:44,635 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:48:44,635 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:48:44,635 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:48:44,636 - httpcore.connection - DEBUG - close.started 2025-05-29 23:48:44,636 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:48:44,647 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-29 23:48:44,676 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:48:44,676 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-29 23:48:44,796 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:48:44,796 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-29 23:48:44,801 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-29 23:48:44,957 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 23:48:44,958 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:48:44,958 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:48:44,958 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:48:44,958 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:48:44,958 - 
httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:48:45,095 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 23:48:45,096 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:48:45,096 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:48:45,096 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:48:45,096 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:48:45,096 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:48:45,100 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:48:45 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 23:48:45,100 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 23:48:45,101 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:48:45,101 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:48:45,101 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:48:45,101 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:48:45,101 - httpcore.connection - DEBUG - close.started 2025-05-29 23:48:45,101 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:48:45,247 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:48:45 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-29 23:48:45,248 - httpx - INFO - HTTP Request: GET 
https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-29 23:48:45,248 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:48:45,249 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:48:45,249 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:48:45,249 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:48:45,250 - httpcore.connection - DEBUG - close.started 2025-05-29 23:48:45,250 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:48:45,928 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 23:48:46,505 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-29 23:48:47,065 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 23:48:47,065 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 23:48:47,065 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-29 23:48:47,065 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-29 23:48:47,065 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 23:48:47,065 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 23:48:47,065 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 23:48:47,065 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 23:48:47,065 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:45,671 - __main__ - INFO - Initializing 
GradioAutodiffusers 2025-05-29 23:50:45,671 - __main__ - DEBUG - API key found, length: 39 2025-05-29 23:50:45,671 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 23:50:45,671 - auto_diffusers - DEBUG - API key length: 39 2025-05-29 23:50:45,671 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-29 23:50:45,671 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-29 23:50:45,671 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-29 23:50:45,671 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-29 23:50:45,671 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-29 23:50:45,671 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-29 23:50:45,675 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-29 23:50:45,675 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-29 23:50:46,156 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-29 23:50:46,156 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-29 23:50:46,156 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-29 23:50:46,156 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-29 23:50:46,156 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-29 23:50:46,156 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-29 23:50:46,156 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-29 23:50:46,156 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-29 23:50:46,156 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-29 23:50:46,156 - 
__main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-29 23:50:46,156 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-29 23:50:46,158 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 23:50:46,172 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-29 23:50:46,178 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 23:50:46,266 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 23:50:46,302 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-29 23:50:46,303 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:50:46,303 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:50:46,303 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:50:46,303 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:50:46,303 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:50:46,304 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:50:46,304 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:50:46 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-29 23:50:46,304 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-29 23:50:46,304 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:50:46,304 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:50:46,304 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:50:46,304 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 
23:50:46,304 - httpcore.connection - DEBUG - close.started 2025-05-29 23:50:46,305 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:50:46,305 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-29 23:50:46,305 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:50:46,305 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:50:46,306 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:50:46,306 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:50:46,306 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:50:46,306 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:50:46,312 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:50:46 GMT'), (b'server', b'uvicorn'), (b'content-length', b'111775'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-29 23:50:46,312 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-29 23:50:46,313 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:50:46,313 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:50:46,313 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:50:46,313 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:50:46,313 - httpcore.connection - DEBUG - close.started 2025-05-29 23:50:46,313 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:50:46,324 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-29 23:50:46,376 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:50:46,376 - httpcore.connection - DEBUG - start_tls.started ssl_context= 
server_hostname='api.gradio.app' timeout=3 2025-05-29 23:50:46,463 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:50:46,464 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-29 23:50:46,471 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-29 23:50:46,648 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 23:50:46,648 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:50:46,648 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:50:46,648 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:50:46,649 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:50:46,649 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:50:46,744 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 23:50:46,744 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:50:46,744 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:50:46,744 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:50:46,744 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:50:46,744 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:50:46,786 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:50:46 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 23:50:46,786 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 23:50:46,786 - httpcore.http11 - DEBUG - receive_response_body.started request= 
2025-05-29 23:50:46,786 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:50:46,786 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:50:46,786 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:50:46,786 - httpcore.connection - DEBUG - close.started 2025-05-29 23:50:46,787 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:50:46,885 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:50:46 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-29 23:50:46,885 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-29 23:50:46,885 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:50:46,885 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:50:46,885 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:50:46,885 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:50:46,886 - httpcore.connection - DEBUG - close.started 2025-05-29 23:50:46,886 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:50:47,013 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:47,013 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:47,014 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-29 23:50:47,014 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-29 23:50:47,014 - simple_memory_calculator - INFO - 
Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:47,014 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:47,014 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-29 23:50:47,014 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:47,014 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-29 23:50:47,517 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 23:50:47,739 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-29 23:54:53,253 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-29 23:54:53,253 - __main__ - DEBUG - API key found, length: 39 2025-05-29 23:54:53,253 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 23:54:53,253 - auto_diffusers - DEBUG - API key length: 39 2025-05-29 23:54:53,253 - auto_diffusers - DEBUG - Creating tools for Gemini 2025-05-29 23:54:53,253 - auto_diffusers - INFO - Created 3 tools for Gemini 2025-05-29 23:54:53,253 - auto_diffusers - INFO - Successfully configured Gemini AI model with tools 2025-05-29 23:54:53,253 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-29 23:54:53,253 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-29 23:54:53,253 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-29 23:54:53,253 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.12.9 2025-05-29 23:54:53,253 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-29 23:54:53,258 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-29 23:54:53,258 - hardware_detector - DEBUG - Checking PyTorch availability 
2025-05-29 23:54:53,724 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-29 23:54:53,724 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-29 23:54:53,724 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-29 23:54:53,724 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.12.9', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-29 23:54:53,724 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-29 23:54:53,724 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-29 23:54:53,724 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-29 23:54:53,724 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-29 23:54:53,724 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-29 23:54:53,724 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-29 23:54:53,724 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-29 23:54:53,726 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 23:54:53,734 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 23:54:53,740 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-29 23:54:53,988 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-29 23:54:53,989 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-29 23:54:53,989 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-29 23:54:54,271 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 
23:54:54,271 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:54:54,272 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:54:54,272 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:54:54,272 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:54:54,272 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:54:54,415 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:54:54 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 23:54:54,416 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 23:54:54,416 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:54:54,416 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:54:54,416 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:54:54,416 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:54:54,416 - httpcore.connection - DEBUG - close.started 2025-05-29 23:54:54,416 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:55:03,477 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-29 23:55:03,477 - __main__ - DEBUG - API key found, length: 39 2025-05-29 23:55:03,477 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 23:55:03,477 - auto_diffusers - DEBUG - API key length: 39 2025-05-29 23:55:03,477 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-29 23:55:03,477 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-29 23:55:03,477 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-29 23:55:03,477 - hardware_detector - DEBUG - Platform: 
Darwin, Architecture: arm64 2025-05-29 23:55:03,477 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-29 23:55:03,477 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-29 23:55:03,481 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-29 23:55:03,481 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-29 23:55:03,929 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-29 23:55:03,929 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-29 23:55:03,929 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-29 23:55:03,929 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-29 23:55:03,929 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-29 23:55:03,929 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-29 23:55:03,929 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-29 23:55:03,929 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-29 23:55:03,929 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-29 23:55:03,929 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-29 23:55:03,929 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-29 23:55:03,931 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-29 23:55:03,944 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-29 23:55:03,950 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-29 23:55:04,086 - httpcore.connection - DEBUG - connect_tcp.complete 
return_value= 2025-05-29 23:55:04,086 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-29 23:55:04,359 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-29 23:55:04,359 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-29 23:55:04,360 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-29 23:55:04,360 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-29 23:55:04,360 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-29 23:55:04,360 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-29 23:55:04,410 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-29 23:55:04,498 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:55:04 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-29 23:55:04,500 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-29 23:55:04,500 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-29 23:55:04,501 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-29 23:55:04,501 - httpcore.http11 - DEBUG - response_closed.started 2025-05-29 23:55:04,501 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-29 23:55:04,501 - httpcore.connection - DEBUG - close.started 2025-05-29 23:55:04,502 - httpcore.connection - DEBUG - close.complete 2025-05-29 23:55:14,094 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-29 23:55:14,094 - __main__ - DEBUG - API key found, length: 39 2025-05-29 23:55:14,094 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-29 23:55:14,094 - 
auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:55:14,094 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 23:55:14,094 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:55:14,094 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:55:14,094 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:55:14,094 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 23:55:14,094 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:55:14,097 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:55:14,098 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:55:14,515 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:55:14,515 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:55:14,515 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:55:14,516 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:55:14,516 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:55:14,516 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:55:14,516 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:55:14,516 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:55:14,516 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:55:14,516 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:55:14,516 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:55:14,518 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:55:14,530 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:55:14,536 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:55:14,678 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:55:14,679 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 23:55:14,759 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 23:55:14,964 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:55:14,965 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:55:14,965 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:55:14,965 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:55:14,965 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:55:14,965 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:55:15,107 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:55:15 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 23:55:15,108 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 23:55:15,108 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:55:15,108 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:55:15,108 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:55:15,108 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:55:15,108 - httpcore.connection - DEBUG - close.started
2025-05-29 23:55:15,109 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:55:43,365 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 23:55:43,366 - __main__ - DEBUG - API key found, length: 39
2025-05-29 23:55:43,366 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 23:55:43,366 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:55:43,366 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 23:55:43,366 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:55:43,366 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:55:43,366 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:55:43,366 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 23:55:43,366 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:55:43,369 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:55:43,369 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:55:43,790 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:55:43,790 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:55:43,790 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:55:43,790 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:55:43,790 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:55:43,790 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:55:43,790 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:55:43,790 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:55:43,790 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:55:43,790 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:55:43,790 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:55:43,792 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:55:43,804 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:55:43,811 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:55:43,947 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:55:43,947 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 23:55:44,023 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 23:55:44,220 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:55:44,220 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:55:44,221 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:55:44,221 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:55:44,221 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:55:44,221 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:55:44,358 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:55:44 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 23:55:44,359 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 23:55:44,359 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:55:44,359 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:55:44,359 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:55:44,359 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:55:44,359 - httpcore.connection - DEBUG - close.started
2025-05-29 23:55:44,359 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:56:37,153 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-29 23:56:37,153 - __main__ - DEBUG - API key found, length: 39
2025-05-29 23:56:37,153 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-29 23:56:37,153 - auto_diffusers - DEBUG - API key length: 39
2025-05-29 23:56:37,153 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-29 23:56:37,153 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-29 23:56:37,153 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-29 23:56:37,153 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-29 23:56:37,153 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-29 23:56:37,153 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-29 23:56:37,156 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-29 23:56:37,156 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-29 23:56:37,560 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-29 23:56:37,560 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-29 23:56:37,560 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-29 23:56:37,560 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-29 23:56:37,560 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-29 23:56:37,560 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-29 23:56:37,560 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-29 23:56:37,560 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-29 23:56:37,560 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-29 23:56:37,560 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-29 23:56:37,560 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-29 23:56:37,562 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:56:37,575 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-29 23:56:37,581 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:56:37,658 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-29 23:56:37,691 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-29 23:56:37,692 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:56:37,692 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:56:37,692 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:56:37,692 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:56:37,692 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:56:37,692 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:56:37,692 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:56:37 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-29 23:56:37,693 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-29 23:56:37,693 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:56:37,693 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:56:37,693 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:56:37,693 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:56:37,693 - httpcore.connection - DEBUG - close.started
2025-05-29 23:56:37,693 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:56:37,693 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-29 23:56:37,694 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:56:37,694 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:56:37,694 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:56:37,694 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:56:37,694 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:56:37,694 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:56:37,700 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 14:56:37 GMT'), (b'server', b'uvicorn'), (b'content-length', b'106594'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-29 23:56:37,700 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-29 23:56:37,700 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:56:37,700 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:56:37,700 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:56:37,700 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:56:37,701 - httpcore.connection - DEBUG - close.started
2025-05-29 23:56:37,701 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:56:37,711 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-29 23:56:37,874 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-29 23:56:37,902 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:56:37,902 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-29 23:56:37,903 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-29 23:56:37,903 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-29 23:56:38,185 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:56:38,185 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:56:38,185 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:56:38,185 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:56:38,185 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:56:38,185 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:56:38,187 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-29 23:56:38,187 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-29 23:56:38,187 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-29 23:56:38,187 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-29 23:56:38,187 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-29 23:56:38,187 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-29 23:56:38,330 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:56:38 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-29 23:56:38,330 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-29 23:56:38,331 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:56:38,331 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 14:56:38 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-29 23:56:38,331 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:56:38,332 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-29 23:56:38,332 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:56:38,332 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-29 23:56:38,332 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:56:38,333 - httpcore.connection - DEBUG - close.started
2025-05-29 23:56:38,333 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-29 23:56:38,333 - httpcore.http11 - DEBUG - response_closed.started
2025-05-29 23:56:38,333 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:56:38,333 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-29 23:56:38,334 - httpcore.connection - DEBUG - close.started
2025-05-29 23:56:38,334 - httpcore.connection - DEBUG - close.complete
2025-05-29 23:56:39,018 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-29 23:56:39,237 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-29 23:56:42,308 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:56:42,308 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:56:42,308 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-29 23:56:42,308 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-29 23:56:42,308 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:56:42,308 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-29 23:56:42,308 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-29 23:56:42,308 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-29 23:56:42,308 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:01,476 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:00:01,476 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:00:01,476 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:00:01,476 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:00:01,476 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:00:01,476 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:00:01,476 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:00:01,476 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:00:01,476 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:00:01,476 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:00:01,480 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:00:01,481 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:00:01,916 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:00:01,916 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:00:01,916 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:00:01,916 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:00:01,916 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:00:01,916 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:00:01,916 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:00:01,916 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:00:01,916 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:00:01,916 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:00:01,916 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:00:01,918 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:00:01,930 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:00:01,936 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:00:02,010 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:00:02,048 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:00:02,049 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:00:02,049 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:00:02,049 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:00:02,049 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:00:02,049 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:00:02,049 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:00:02,050 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:00:02 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:00:02,050 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:00:02,050 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:00:02,050 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:00:02,050 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:00:02,050 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:00:02,050 - httpcore.connection - DEBUG - close.started
2025-05-30 00:00:02,050 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:00:02,050 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:00:02,052 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:00:02,052 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:00:02,052 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:00:02,052 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:00:02,052 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:00:02,052 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:00:02,058 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:00:02 GMT'), (b'server', b'uvicorn'), (b'content-length', b'109706'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:00:02,058 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:00:02,058 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:00:02,058 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:00:02,058 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:00:02,058 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:00:02,058 - httpcore.connection - DEBUG - close.started
2025-05-30 00:00:02,058 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:00:02,069 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:00:02,138 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:00:02,138 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:00:02,207 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:00:02,207 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:00:02,293 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:00:02,420 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:00:02,420 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:00:02,420 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:00:02,421 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:00:02,421 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:00:02,421 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:00:02,486 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:00:02,486 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:00:02,487 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:00:02,487 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:00:02,487 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:00:02,487 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:00:02,561 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:00:02 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:00:02,562 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:00:02,563 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:00:02,563 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:00:02,564 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:00:02,564 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:00:02,564 - httpcore.connection - DEBUG - close.started
2025-05-30 00:00:02,565 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:00:02,627 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:00:02 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:00:02,629 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:00:02,630 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:00:02,630 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:00:02,630 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:00:02,631 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:00:02,635 - httpcore.connection - DEBUG - close.started
2025-05-30 00:00:02,636 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:00:02,709 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:02,709 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:02,709 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:00:02,710 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:00:02,710 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:02,710 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:02,710 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:00:02,710 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:02,710 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:00:03,267 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:00:03,490 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:04:08,649 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:04:08,650 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:04:08,650 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:04:08,650 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:04:08,650 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:04:08,650 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:04:08,650 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:04:08,650 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:04:08,650 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:04:08,650 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:04:08,653 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:04:08,654 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:04:09,095 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:04:09,095 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:04:09,095 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:04:09,095 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:04:09,095 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:04:09,095 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:04:09,095 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:04:09,095 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:04:09,095 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:04:09,095 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:04:09,096 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:04:09,098 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:04:09,119 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:04:09,125 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:04:09,205 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:04:09,239 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:04:09,240 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:04:09,240 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:04:09,240 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:04:09,240 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:04:09,240 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:04:09,240 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:04:09,240 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:04:09 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:04:09,241 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:04:09,241 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:04:09,241 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:04:09,241 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:04:09,241 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:04:09,241 - httpcore.connection - DEBUG - close.started
2025-05-30 00:04:09,241 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:04:09,241 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:04:09,242 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:04:09,242 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:04:09,242 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:04:09,242 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:04:09,242 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:04:09,242 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:04:09,248 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:04:09 GMT'), (b'server', b'uvicorn'), (b'content-length', b'107757'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:04:09,248 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:04:09,248 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:04:09,248 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:04:09,248 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:04:09,248 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:04:09,248 - httpcore.connection - DEBUG - close.started
2025-05-30 00:04:09,248 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:04:09,259 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:04:09,286 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:04:09,286 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:04:09,399 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:04:09,399 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:04:09,404 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:04:09,567 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:04:09,567 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:04:09,567 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:04:09,567 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:04:09,567 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:04:09,567 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:04:09,678 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:04:09,678 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:04:09,678 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:04:09,678 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:04:09,678 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:04:09,678 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:04:09,710 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:04:09 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:04:09,710 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:04:09,710 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:04:09,710 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:04:09,710 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:04:09,710 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:04:09,710 - httpcore.connection - DEBUG - close.started
2025-05-30 00:04:09,711 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:04:09,820 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:04:09 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:04:09,820 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:04:09,820 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:04:09,820 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:04:09,820 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:04:09,820 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:04:09,820 - httpcore.connection - DEBUG - close.started
2025-05-30 00:04:09,820 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:04:10,304 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:04:10,304 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:04:10,305 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:04:10,305 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:04:10,305 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:04:10,305 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:04:10,305 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:04:10,305 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:04:10,305 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:04:10,453 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:04:10,668 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:05:09,731 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:05:09,732 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:05:09,732 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:05:09,732 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:05:09,732 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:05:09,732 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:05:09,732 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:05:09,732 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:05:09,732 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:05:09,732 -
hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:05:09,736 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:05:09,736 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:05:10,155 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:05:10,155 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:05:10,155 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:05:10,155 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:05:10,155 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:05:10,155 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:05:10,155 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:05:10,155 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:05:10,155 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:05:10,155 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:05:10,155 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:05:10,157 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:05:10,170 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:05:10,177 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:05:10,271 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:05:10,302 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None 
socket_options=None 2025-05-30 00:05:10,303 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:05:10,303 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:05:10,303 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:05:10,303 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:05:10,303 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:05:10,304 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:05:10,304 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:05:10 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:05:10,304 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:05:10,304 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:05:10,304 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:05:10,304 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:05:10,304 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:05:10,304 - httpcore.connection - DEBUG - close.started 2025-05-30 00:05:10,304 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:05:10,304 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:05:10,305 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:05:10,305 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:05:10,305 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:05:10,305 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:05:10,305 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 
00:05:10,305 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:05:10,311 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:05:10 GMT'), (b'server', b'uvicorn'), (b'content-length', b'107689'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:05:10,311 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:05:10,311 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:05:10,311 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:05:10,311 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:05:10,311 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:05:10,311 - httpcore.connection - DEBUG - close.started 2025-05-30 00:05:10,311 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:05:10,315 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:05:10,315 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:05:10,322 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:05:10,453 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:05:10,464 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:05:10,464 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:05:10,591 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:05:10,591 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:05:10,592 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:05:10,592 - httpcore.http11 - DEBUG - 
send_request_body.started request= 2025-05-30 00:05:10,592 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:05:10,592 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:05:10,732 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:05:10 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:05:10,733 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:05:10,733 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:05:10,734 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:05:10,734 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:05:10,734 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:05:10,734 - httpcore.connection - DEBUG - close.started 2025-05-30 00:05:10,735 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:05:10,750 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:05:10,751 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:05:10,751 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:05:10,751 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:05:10,751 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:05:10,751 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:05:10,896 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:05:10 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', 
b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:05:10,897 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:05:10,898 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:05:10,898 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:05:10,898 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:05:10,898 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:05:10,899 - httpcore.connection - DEBUG - close.started 2025-05-30 00:05:10,899 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:05:11,318 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:05:11,318 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:05:11,318 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:05:11,318 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:05:11,318 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:05:11,318 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:05:11,318 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:05:11,318 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:05:11,318 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:05:11,467 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:05:11,688 - 
urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:06:35,442 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:06:35,442 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:06:35,442 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:06:35,442 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:06:35,442 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:06:35,442 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:06:35,442 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:06:35,442 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:06:35,442 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:06:35,442 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:06:35,446 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:06:35,446 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:06:35,858 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:06:35,858 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:06:35,858 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:06:35,858 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:06:35,858 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:06:35,858 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:06:35,858 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:06:35,858 - 
simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:06:35,858 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:06:35,858 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:06:35,858 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:06:35,860 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:06:35,873 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:06:35,878 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:06:35,949 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:06:35,984 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:06:35,984 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:06:35 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:06:35,985 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:06:35,985 - httpcore.http11 - DEBUG - 
receive_response_body.complete 2025-05-30 00:06:35,986 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:06:35,986 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:06:35,986 - httpcore.connection - DEBUG - close.started 2025-05-30 00:06:35,986 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:06:35,986 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:06:35,986 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:06:35,986 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:06:35,987 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:06:35,987 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:06:35,987 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:06:35,987 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:06:35,993 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:06:35 GMT'), (b'server', b'uvicorn'), (b'content-length', b'107788'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:06:35,993 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:06:35,993 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:06:35,993 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:06:35,993 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:06:35,993 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:06:35,993 - httpcore.connection - DEBUG - close.started 2025-05-30 00:06:35,993 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:06:36,003 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 
socket_options=None 2025-05-30 00:06:36,066 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:06:36,066 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:06:36,146 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:06:36,146 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:06:36,188 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:06:36,349 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:06:36,350 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:06:36,351 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:06:36,351 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:06:36,351 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:06:36,351 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:06:36,439 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:06:36,441 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:06:36,443 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:06:36,444 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:06:36,444 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:06:36,444 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:06:36,493 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:06:36 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 
00:06:36,494 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:06:36,494 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:06:36,495 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:06:36,495 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:06:36,495 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:06:36,495 - httpcore.connection - DEBUG - close.started 2025-05-30 00:06:36,496 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:06:36,588 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:06:36 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:06:36,589 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:06:36,589 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:06:36,590 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:06:36,590 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:06:36,590 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:06:36,590 - httpcore.connection - DEBUG - close.started 2025-05-30 00:06:36,590 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:06:37,231 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:06:37,458 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:06:55,393 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:06:55,393 - 
simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:06:55,393 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:06:55,393 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:06:55,393 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:06:55,393 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:06:55,393 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:06:55,393 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:06:55,394 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:08:02,686 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:08:02,686 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:08:02,686 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:08:02,686 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:08:02,686 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:08:02,686 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:08:02,687 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:08:02,687 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:08:02,687 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:08:02,687 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:08:02,690 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:08:02,690 - 
hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:08:03,143 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:08:03,143 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:08:03,144 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:08:03,144 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:08:03,144 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:08:03,144 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:08:03,144 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:08:03,144 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:08:03,144 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:08:03,144 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:08:03,144 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:08:03,146 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:08:03,160 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:08:03,166 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:08:03,239 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:08:03,273 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:08:03,274 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:08:03,274 - httpcore.http11 - DEBUG - send_request_headers.started request= 
2025-05-30 00:08:03,274 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:08:03 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:08:03,275 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:08:03,275 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:08:03,275 - httpcore.connection - DEBUG - close.started 2025-05-30 00:08:03,275 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:08:03,276 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:08:03,276 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:08:03,276 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:08:03,276 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:08:03,276 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:08:03,276 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:08:03,276 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:08:03,283 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', 
[(b'date', b'Thu, 29 May 2025 15:08:03 GMT'), (b'server', b'uvicorn'), (b'content-length', b'107135'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:08:03,283 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:08:03,283 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:08:03,283 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:08:03,283 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:08:03,283 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:08:03,283 - httpcore.connection - DEBUG - close.started
2025-05-30 00:08:03,283 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:08:03,295 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:08:03,328 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:08:03,328 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:08:03,436 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:08:03,436 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:08:03,443 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:08:03,617 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:08:03,618 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:08:03,618 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:08:03,619 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:08:03,619 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:08:03,619 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:08:03,715 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:08:03,715 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:08:03,716 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:08:03,716 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:08:03,716 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:08:03,716 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:08:03,763 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:08:03 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:08:03,763 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:08:03,763 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:08:03,763 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:08:03,763 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:08:03,763 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:08:03,763 - httpcore.connection - DEBUG - close.started
2025-05-30 00:08:03,763 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:08:03,855 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:08:03 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:08:03,856 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:08:03,856 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:08:03,856 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:08:03,856 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:08:03,856 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:08:03,856 - httpcore.connection - DEBUG - close.started
2025-05-30 00:08:03,856 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:08:04,087 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:08:04,087 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:08:04,087 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:08:04,087 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:08:04,087 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:08:04,087 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:08:04,087 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:08:04,087 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:08:04,087 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:08:04,452 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:08:04,684 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:12:12,860 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:12:12,860 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:12:12,860 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:12:12,860 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:12:12,860 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:12:12,860 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:12:12,860 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:12:12,860 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:12:12,860 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:12:12,860 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:12:12,864 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:12:12,865 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:12:13,341 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:12:13,341 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:12:13,341 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:12:13,341 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:12:13,341 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:12:13,342 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:12:13,342 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:12:13,342 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:12:13,342 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:12:13,342 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:12:13,342 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:12:13,344 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:12:13,358 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:12:13,365 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:12:13,448 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:12:13,481 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:12:13,482 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:12:13,482 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:12:13,482 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:12:13 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:12:13,483 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:12:13,483 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:12:13,483 - httpcore.connection - DEBUG - close.started
2025-05-30 00:12:13,483 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:12:13,484 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:12:13,484 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:12:13,484 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:12:13,484 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:12:13,484 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:12:13,484 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:12:13,484 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:12:13,490 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:12:13 GMT'), (b'server', b'uvicorn'), (b'content-length', b'106328'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:12:13,491 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:12:13,491 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:12:13,491 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:12:13,491 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:12:13,491 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:12:13,491 - httpcore.connection - DEBUG - close.started
2025-05-30 00:12:13,491 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:12:13,502 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:12:13,604 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:12:13,604 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:12:13,643 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:12:13,643 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:12:13,650 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:12:13,887 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:12:13,888 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:12:13,888 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:12:13,888 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:12:13,888 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:12:13,888 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:12:13,919 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:12:13,919 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:12:13,919 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:12:13,919 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:12:13,919 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:12:13,919 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:12:14,033 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:12:14 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:12:14,033 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:12:14,034 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:12:14,034 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:12:14,034 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:12:14,034 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:12:14,034 - httpcore.connection - DEBUG - close.started
2025-05-30 00:12:14,034 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:12:14,060 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:12:14 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:12:14,061 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:12:14,061 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:12:14,061 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:12:14,062 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:12:14,062 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:12:14,062 - httpcore.connection - DEBUG - close.started
2025-05-30 00:12:14,062 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:12:14,141 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:12:14,141 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:12:14,141 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:12:14,141 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:12:14,141 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:12:14,142 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:12:14,142 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:12:14,142 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:12:14,142 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:12:14,655 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:12:14,884 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:13:17,944 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:13:17,944 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:13:17,944 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:13:17,944 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:13:17,944 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:13:17,944 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:13:17,944 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:13:17,944 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:13:17,944 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:13:17,944 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:13:17,948 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:13:17,948 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:13:18,354 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:13:18,354 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:13:18,354 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:13:18,354 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:13:18,354 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:13:18,354 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:13:18,354 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:13:18,354 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:13:18,354 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:13:18,354 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:13:18,354 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:13:18,356 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:13:18,369 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:13:18,375 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:13:18,445 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:13:18,480 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:13:18,481 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:13:18,481 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:13:18,481 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:13:18,481 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:13:18,481 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:13:18,481 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:13:18,481 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:13:18 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:13:18,482 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:13:18,482 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:13:18,482 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:13:18,482 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:13:18,482 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:13:18,482 - httpcore.connection - DEBUG - close.started
2025-05-30 00:13:18,482 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:13:18,482 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:13:18,482 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:13:18,483 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:13:18,483 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:13:18,483 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:13:18,483 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:13:18,483 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:13:18,489 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:13:18 GMT'), (b'server', b'uvicorn'), (b'content-length', b'106650'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:13:18,489 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:13:18,489 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:13:18,489 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:13:18,489 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:13:18,489 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:13:18,489 - httpcore.connection - DEBUG - close.started
2025-05-30 00:13:18,489 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:13:18,500 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:13:18,512 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:13:18,512 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:13:18,637 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:13:18,637 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:13:18,786 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:13:18,787 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:13:18,789 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:13:18,792 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:13:18,792 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:13:18,793 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:13:18,873 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:13:18,912 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:13:18,912 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:13:18,913 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:13:18,913 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:13:18,913 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:13:18,913 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:13:18,927 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:13:18 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:13:18,927 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:13:18,927 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:13:18,927 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:13:18,928 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:13:18,928 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:13:18,928 - httpcore.connection - DEBUG - close.started
2025-05-30 00:13:18,928 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:13:19,052 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:13:19 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:13:19,052 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:13:19,052 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:13:19,053 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:13:19,053 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:13:19,053 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:13:19,053 - httpcore.connection - DEBUG - close.started
2025-05-30 00:13:19,053 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:13:19,638 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:13:19,859 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:14:24,341 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:14:24,341 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:14:24,341 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:14:24,341 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:14:24,341 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:14:24,341 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:14:24,341 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:14:24,341 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:14:24,341 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:14:24,341 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:14:24,345 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:14:24,345 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:14:24,811 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:14:24,811 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:14:24,811 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:14:24,811 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:14:24,811 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:14:24,811 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:14:24,812 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:14:24,812 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:14:24,812 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:14:24,812 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:14:24,812 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:14:24,814 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:14:24,827 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:14:24,833 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:14:24,917 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:14:24,952 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:14:24,952 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:14:24,953 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:14:24,953 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:14:24,953 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:14:24,953 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:14:24,953 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:14:24,953 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:14:24 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:14:24,954 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:14:24,954 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:14:24,954 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:14:24,954 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:14:24,954 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:14:24,954 - httpcore.connection - DEBUG - close.started
2025-05-30 00:14:24,954 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:14:24,954 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:14:24,955 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:14:24,955 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:14:24,955 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:14:24,955 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:14:24,955 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:14:24,955 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:14:24,961 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:14:24 GMT'), (b'server', b'uvicorn'), (b'content-length', b'106654'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:14:24,961 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:14:24,961 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:14:24,961 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:14:24,961 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:14:24,961 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:14:24,961 - httpcore.connection - DEBUG - close.started
2025-05-30 00:14:24,961 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:14:24,972 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:14:24,993 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:14:24,993 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:14:25,111 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:14:25,111 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:14:25,273 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:14:25,273 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:14:25,273 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:14:25,274 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:14:25,274 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:14:25,274 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:14:25,389 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:14:25,389 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:14:25,389 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:14:25,389 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:14:25,389 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:14:25,390 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:14:25,414 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:14:25 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:14:25,414 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:14:25,414 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:14:25,415 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:14:25,415 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:14:25,415 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:14:25,415 - httpcore.connection - DEBUG - close.started
2025-05-30 00:14:25,415 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:14:25,495 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:14:25,530 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:14:25 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:14:25,531 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:14:25,531 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:14:25,532 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:14:25,532 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:14:25,532 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:14:25,532 - httpcore.connection - DEBUG - close.started
2025-05-30 00:14:25,533 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:14:26,127 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:14:26,354 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:14:28,000 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:14:28,001 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:14:28,001 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:14:28,001 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:14:28,001 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:14:28,001 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:14:28,001 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:14:28,001 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:14:28,001 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:16:16,934 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:16:16,934 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:16:16,934 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:16:16,934 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:16:16,934 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:16:16,934 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:16:16,934 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:16:16,934 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:16:16,934 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:16:16,934 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:16:16,939 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:16:16,939 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:16:17,437 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:16:17,437 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:16:17,437 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:16:17,437 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:16:17,437 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:16:17,437 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:16:17,437 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:16:17,437 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:16:17,437 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:16:17,437 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:16:17,437 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:16:17,439 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:16:17,452 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:16:17,458 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:16:17,529 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:16:17,563 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:16:17,563 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:16:17,563 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:16:17,563 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:16:17,563 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:16:17 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:16:17,564 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:16:17,564 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:16:17,564 - httpcore.connection - DEBUG - close.started
2025-05-30 00:16:17,564 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:16:17,565 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:16:17,565 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:16:17,565 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:16:17,565 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:16:17,565 -
httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:16:17,565 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:16:17,565 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:16:17,571 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:16:17 GMT'), (b'server', b'uvicorn'), (b'content-length', b'106648'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:16:17,571 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:16:17,571 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:16:17,571 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:16:17,571 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:16:17,571 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:16:17,572 - httpcore.connection - DEBUG - close.started 2025-05-30 00:16:17,572 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:16:17,582 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:16:17,653 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:16:17,654 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:16:17,724 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:16:17,724 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:16:17,729 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:16:17,936 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:16:17,936 - httpcore.http11 - DEBUG - send_request_headers.started request= 
2025-05-30 00:16:17,937 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:16:17,937 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:16:17,937 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:16:17,937 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:16:18,009 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:16:18,009 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:16:18,011 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:16:18,011 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:16:18,011 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:16:18,011 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:16:18,080 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:16:18 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:16:18,080 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:16:18,080 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:16:18,081 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:16:18,081 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:16:18,081 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:16:18,081 - httpcore.connection - DEBUG - close.started 2025-05-30 00:16:18,082 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:16:18,155 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:16:18 GMT'), (b'Content-Type', b'text/html; 
charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:16:18,156 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:16:18,156 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:16:18,157 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:16:18,157 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:16:18,157 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:16:18,157 - httpcore.connection - DEBUG - close.started 2025-05-30 00:16:18,157 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:16:18,761 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:16:18,778 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:18,778 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:18,778 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:16:18,779 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:16:18,779 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:18,779 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:18,779 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:16:18,779 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:18,779 - 
simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:18,982 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:16:30,473 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,473 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,473 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:16:30,473 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,474 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,474 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:16:30,474 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,474 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,474 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:16:30,474 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4 2025-05-30 00:16:30,474 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True 2025-05-30 00:16:30,474 - auto_diffusers - INFO - Using manual hardware specifications 2025-05-30 00:16:30,474 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 
'gpu_info': None}
2025-05-30 00:16:30,474 - auto_diffusers - INFO - No GPU detected, using CPU-only profile
2025-05-30 00:16:30,475 - auto_diffusers - INFO - Selected optimization profile: cpu_only
2025-05-30 00:16:30,475 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-30 00:16:30,475 - auto_diffusers - DEBUG - Prompt length: 7566 characters
2025-05-30 00:16:30,475 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:16:30,475 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API:
2025-05-30 00:16:30,475 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:16:30,475 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations.

NOTE: This system includes curated optimization knowledge from HuggingFace documentation.

TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: cpu_only

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: 🔄 Requires sequential CPU offloading
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION KNOWLEDGE BASE:

# DIFFUSERS OPTIMIZATION TECHNIQUES

## Memory Optimization Techniques

### 1. Model CPU Offloading
Use `enable_model_cpu_offload()` to move models between GPU and CPU automatically:
```python
pipe.enable_model_cpu_offload()
```
- Saves significant VRAM by keeping only active models on GPU
- Automatic management, no manual intervention needed
- Compatible with all pipelines

### 2. Sequential CPU Offloading
Use `enable_sequential_cpu_offload()` for more aggressive memory saving:
```python
pipe.enable_sequential_cpu_offload()
```
- More memory efficient than model offloading
- Moves models to CPU after each forward pass
- Best for very limited VRAM scenarios

### 3. Attention Slicing
Use `enable_attention_slicing()` to reduce memory during attention computation:
```python
pipe.enable_attention_slicing()
# or specify slice size
pipe.enable_attention_slicing("max")  # maximum slicing
pipe.enable_attention_slicing(1)  # slice_size = 1
```
- Trades compute time for memory
- Most effective for high-resolution images
- Can be combined with other techniques

### 4. VAE Slicing
Use `enable_vae_slicing()` for large batch processing:
```python
pipe.enable_vae_slicing()
```
- Decodes images one at a time instead of all at once
- Essential for batch sizes > 4
- Minimal performance impact on single images

### 5. VAE Tiling
Use `enable_vae_tiling()` for high-resolution image generation:
```python
pipe.enable_vae_tiling()
```
- Enables 4K+ image generation on 8GB VRAM
- Splits images into overlapping tiles
- Automatically disabled for 512x512 or smaller images

### 6. Memory Efficient Attention (xFormers)
Use `enable_xformers_memory_efficient_attention()` if xFormers is installed:
```python
pipe.enable_xformers_memory_efficient_attention()
```
- Significantly reduces memory usage and improves speed
- Requires xformers library installation
- Compatible with most models

## Performance Optimization Techniques

### 1. Half Precision (FP16/BF16)
Use lower precision for better memory and speed:
```python
# FP16 (widely supported)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# BF16 (better numerical stability, newer hardware)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
- FP16: Halves memory usage, widely supported
- BF16: Better numerical stability, requires newer GPUs
- Essential for most optimization scenarios

### 2. Torch Compile (PyTorch 2.0+)
Use `torch.compile()` for significant speed improvements:
```python
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
# For some models, compile VAE too:
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="reduce-overhead", fullgraph=True)
```
- 5-50% speed improvement
- Requires PyTorch 2.0+
- First run is slower due to compilation

### 3. Fast Schedulers
Use faster schedulers for fewer steps:
```python
from diffusers import LMSDiscreteScheduler, UniPCMultistepScheduler
# LMS Scheduler (good quality, fast)
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
# UniPC Scheduler (fastest)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

## Hardware-Specific Optimizations

### NVIDIA GPU Optimizations
```python
# Enable Tensor Cores
torch.backends.cudnn.benchmark = True
# Optimal data type for NVIDIA
torch_dtype = torch.float16  # or torch.bfloat16 for RTX 30/40 series
```

### Apple Silicon (MPS) Optimizations
```python
# Use MPS device
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
# Recommended dtype for Apple Silicon
torch_dtype = torch.bfloat16  # Better than float16 on Apple Silicon
# Attention slicing often helps on MPS
pipe.enable_attention_slicing()
```

### CPU Optimizations
```python
# Use float32 for CPU
torch_dtype = torch.float32
# Enable optimized attention
pipe.enable_attention_slicing()
```

## Model-Specific Guidelines

### FLUX Models
- Do NOT use guidance_scale parameter (not needed for FLUX)
- Use 4-8 inference steps maximum
- BF16 dtype recommended
- Enable attention slicing for memory optimization

### Stable Diffusion XL
- Enable attention slicing for high resolutions
- Use refiner model sparingly to save memory
- Consider VAE tiling for >1024px images

### Stable Diffusion 1.5/2.1
- Very memory efficient base models
- Can often run without optimizations on 8GB+ VRAM
- Enable VAE slicing for batch processing

## Memory Usage Estimation
- FLUX.1: ~24GB for full precision, ~12GB for FP16
- SDXL: ~7GB for FP16, ~14GB for FP32
- SD 1.5: ~2GB for FP16, ~4GB for FP32

## Optimization Combinations by VRAM

### 24GB+ VRAM (High-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### 12-24GB VRAM (Mid-range)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

### 8-12GB VRAM (Entry-level)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```

### <8GB VRAM (Low-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing("max")
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Using the OPTIMIZATION KNOWLEDGE BASE above, generate Python code that:
1. **Selects the best optimization techniques** for the specific hardware profile
2. **Applies appropriate memory optimizations** based on available VRAM
3. **Uses optimal data types** for the target hardware:
   - User specified dtype (if provided): Use exactly as specified
   - Apple Silicon (MPS): prefer torch.bfloat16
   - NVIDIA GPUs: prefer torch.float16 or torch.bfloat16
   - CPU only: use torch.float32
4. **Implements hardware-specific optimizations** (CUDA, MPS, CPU)
5. **Follows model-specific guidelines** (e.g., FLUX guidance_scale handling)

IMPORTANT GUIDELINES:
- Reference the OPTIMIZATION KNOWLEDGE BASE to select appropriate techniques
- Include all necessary imports
- Add brief comments explaining optimization choices
- Generate compact, production-ready code
- Inline values where possible for concise code
- Generate ONLY the Python code, no explanations before or after the code block
2025-05-30 00:16:30,475 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:16:30,475 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-30 00:16:45,777 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-30 00:16:45,777 - auto_diffusers - DEBUG - Response length: 2393 characters
2025-05-30 00:20:26,203 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:20:26,203 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:20:26,203 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:20:26,203 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:20:26,203 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:20:26,203 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:20:26,203 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:20:26,204 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:20:26,204 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:20:26,204 - hardware_detector - DEBUG - Attempting GPU detection
via nvidia-smi
2025-05-30 00:20:26,207 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:20:26,208 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:20:26,674 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:20:26,674 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:20:26,674 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:20:26,674 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:20:26,674 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:20:26,674 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:20:26,674 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:20:26,674 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:20:26,674 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:20:26,674 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:20:26,674 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:20:26,676 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:20:26,689 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:20:26,695 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:20:26,775 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:20:26,809 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:20:26,809 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:20:26,810 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:20:26,810 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:20:26,810 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:20:26,810 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:20:26,810 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:20:26,810 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:20:26 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:20:26,810 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:20:26,811 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:20:26,811 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:20:26,811 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:20:26,811 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:20:26,811 - httpcore.connection - DEBUG - close.started
2025-05-30 00:20:26,811 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:20:26,811 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:20:26,812 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:20:26,812 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:20:26,812 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:20:26,812 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:20:26,812 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:20:26,812 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:20:26,818 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:20:26 GMT'), (b'server', b'uvicorn'), (b'content-length', b'104665'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:20:26,818 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:20:26,818 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:20:26,818 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:20:26,819 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:20:26,819 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:20:26,819 - httpcore.connection - DEBUG - close.started
2025-05-30 00:20:26,819 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:20:26,829 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:20:26,983 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:20:27,005 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:20:27,005 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:20:27,005 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:20:27,005 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:20:27,291 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:20:27,291 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:20:27,291 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:20:27,292 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:20:27,292 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:20:27,292 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:20:27,292 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:20:27,292 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:20:27,292 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:20:27,293 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:20:27,293 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:20:27,293 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:20:27,437 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:20:27 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:20:27,437 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:20:27,437 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:20:27,438 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:20:27,438 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:20:27,438 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:20:27,438 - httpcore.connection - DEBUG - close.started
2025-05-30 00:20:27,439 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:20:27 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:20:27,439 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:20:27,439 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:20:27,439 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:20:27,440 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:20:27,440 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:20:27,440 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:20:27,440 - httpcore.connection - DEBUG - close.started
2025-05-30 00:20:27,440 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:20:28,103 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:20:28,371 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:20:28,686 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:28,686 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:28,686 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:20:28,686 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:20:28,686 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:28,687 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:28,687 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:20:28,687 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:28,687 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:20:30,112 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:20:30,112 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:20:30,112 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
2025-05-30 00:20:30,112 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-30 00:20:30,112 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-30 00:20:30,112 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': None}
2025-05-30 00:20:30,112 - auto_diffusers - INFO - No GPU detected, using CPU-only profile
2025-05-30 00:20:30,112 - auto_diffusers - INFO - Selected optimization profile: cpu_only
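The SimpleMemoryCalculator figures logged above are mutually consistent: 12.0B parameters at FP16 give 24.0 GB of weights, and the logged inference peak is 1.5x that. A minimal sketch of how such estimates could be computed follows; `estimate_fp16_memory` and the 1.5x inference factor are assumptions for illustration, not the actual SimpleMemoryCalculator implementation.

```python
def estimate_fp16_memory(params_billions: float) -> tuple[float, float]:
    """Estimate (weights_gb, inference_gb) for an FP16 model.

    Assumptions: 2 bytes per parameter for FP16 weights, and a 1.5x
    overhead factor for activations during inference (inferred from the
    logged 24.0GB -> 36.0GB relationship, not from the real code).
    """
    weights_gb = params_billions * 2.0   # 12.0B params * 2 bytes = 24.0 GB
    inference_gb = weights_gb * 1.5      # assumed activation overhead
    return weights_gb, inference_gb

# Reproduces the 'Known data' entry for FLUX.1-schnell in the log above.
print(estimate_fp16_memory(12.0))  # (24.0, 36.0)
```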
2025-05-30 00:20:30,112 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-30 00:20:30,112 - auto_diffusers - DEBUG - Prompt length: 7566 characters
2025-05-30 00:20:30,112 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:20:30,112 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API:
2025-05-30 00:20:30,112 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:20:30,112 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations.

NOTE: This system includes curated optimization knowledge from HuggingFace documentation.

TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: cpu_only

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: 🔄 Requires sequential CPU offloading
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION KNOWLEDGE BASE:

# DIFFUSERS OPTIMIZATION TECHNIQUES

## Memory Optimization Techniques

### 1. Model CPU Offloading
Use `enable_model_cpu_offload()` to move models between GPU and CPU automatically:
```python
pipe.enable_model_cpu_offload()
```
- Saves significant VRAM by keeping only active models on GPU
- Automatic management, no manual intervention needed
- Compatible with all pipelines

### 2. Sequential CPU Offloading
Use `enable_sequential_cpu_offload()` for more aggressive memory saving:
```python
pipe.enable_sequential_cpu_offload()
```
- More memory efficient than model offloading
- Moves models to CPU after each forward pass
- Best for very limited VRAM scenarios

### 3. Attention Slicing
Use `enable_attention_slicing()` to reduce memory during attention computation:
```python
pipe.enable_attention_slicing()
# or specify slice size
pipe.enable_attention_slicing("max")  # maximum slicing
pipe.enable_attention_slicing(1)      # slice_size = 1
```
- Trades compute time for memory
- Most effective for high-resolution images
- Can be combined with other techniques

### 4. VAE Slicing
Use `enable_vae_slicing()` for large batch processing:
```python
pipe.enable_vae_slicing()
```
- Decodes images one at a time instead of all at once
- Essential for batch sizes > 4
- Minimal performance impact on single images

### 5. VAE Tiling
Use `enable_vae_tiling()` for high-resolution image generation:
```python
pipe.enable_vae_tiling()
```
- Enables 4K+ image generation on 8GB VRAM
- Splits images into overlapping tiles
- Automatically disabled for 512x512 or smaller images

### 6. Memory Efficient Attention (xFormers)
Use `enable_xformers_memory_efficient_attention()` if xFormers is installed:
```python
pipe.enable_xformers_memory_efficient_attention()
```
- Significantly reduces memory usage and improves speed
- Requires xformers library installation
- Compatible with most models

## Performance Optimization Techniques

### 1. Half Precision (FP16/BF16)
Use lower precision for better memory and speed:
```python
# FP16 (widely supported)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# BF16 (better numerical stability, newer hardware)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
- FP16: Halves memory usage, widely supported
- BF16: Better numerical stability, requires newer GPUs
- Essential for most optimization scenarios

### 2. Torch Compile (PyTorch 2.0+)
Use `torch.compile()` for significant speed improvements:
```python
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
# For some models, compile VAE too:
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="reduce-overhead", fullgraph=True)
```
- 5-50% speed improvement
- Requires PyTorch 2.0+
- First run is slower due to compilation

### 3. Fast Schedulers
Use faster schedulers for fewer steps:
```python
from diffusers import LMSDiscreteScheduler, UniPCMultistepScheduler
# LMS Scheduler (good quality, fast)
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
# UniPC Scheduler (fastest)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

## Hardware-Specific Optimizations

### NVIDIA GPU Optimizations
```python
# Enable Tensor Cores
torch.backends.cudnn.benchmark = True
# Optimal data type for NVIDIA
torch_dtype = torch.float16  # or torch.bfloat16 for RTX 30/40 series
```

### Apple Silicon (MPS) Optimizations
```python
# Use MPS device
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
# Recommended dtype for Apple Silicon
torch_dtype = torch.bfloat16  # Better than float16 on Apple Silicon
# Attention slicing often helps on MPS
pipe.enable_attention_slicing()
```

### CPU Optimizations
```python
# Use float32 for CPU
torch_dtype = torch.float32
# Enable optimized attention
pipe.enable_attention_slicing()
```

## Model-Specific Guidelines

### FLUX Models
- Do NOT use guidance_scale parameter (not needed for FLUX)
- Use 4-8 inference steps maximum
- BF16 dtype recommended
- Enable attention slicing for memory optimization

### Stable Diffusion XL
- Enable attention slicing for high resolutions
- Use refiner model sparingly to save memory
- Consider VAE tiling for >1024px images

### Stable Diffusion 1.5/2.1
- Very memory efficient base models
- Can often run without optimizations on 8GB+ VRAM
- Enable VAE slicing for batch processing

## Memory Usage Estimation
- FLUX.1: ~24GB for full precision, ~12GB for FP16
- SDXL: ~7GB for FP16, ~14GB for FP32
- SD 1.5: ~2GB for FP16, ~4GB for FP32

## Optimization Combinations by VRAM

### 24GB+ VRAM (High-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### 12-24GB VRAM (Mid-range)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

### 8-12GB VRAM (Entry-level)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```

### <8GB VRAM (Low-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing("max")
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Using the OPTIMIZATION KNOWLEDGE BASE above, generate Python code that:
1. **Selects the best optimization techniques** for the specific hardware profile
2. **Applies appropriate memory optimizations** based on available VRAM
3. **Uses optimal data types** for the target hardware:
   - User specified dtype (if provided): Use exactly as specified
   - Apple Silicon (MPS): prefer torch.bfloat16
   - NVIDIA GPUs: prefer torch.float16 or torch.bfloat16
   - CPU only: use torch.float32
4. **Implements hardware-specific optimizations** (CUDA, MPS, CPU)
5. **Follows model-specific guidelines** (e.g., FLUX guidance_scale handling)

IMPORTANT GUIDELINES:
- Reference the OPTIMIZATION KNOWLEDGE BASE to select appropriate techniques
- Include all necessary imports
- Add brief comments explaining optimization choices
- Generate compact, production-ready code
- Inline values where possible for concise code
- Generate ONLY the Python code, no explanations before or after the code block
2025-05-30 00:20:30,113 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:20:30,113 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-30 00:20:43,867 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-30 00:20:43,868 - auto_diffusers - DEBUG - Response length: 1716 characters
2025-05-30 00:23:06,277 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:23:06,277 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:23:06,277 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:23:06,277 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:23:06,277 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:23:06,277 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:23:06,277 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:23:06,277 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:23:06,277 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:23:06,277 - hardware_detector - DEBUG - Attempting GPU detection
via nvidia-smi
2025-05-30 00:23:06,281 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:23:06,281 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:23:06,749 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:23:06,749 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:23:06,749 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:23:06,749 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:23:06,749 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:23:06,749 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:23:06,750 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:23:06,750 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:23:06,750 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:23:06,750 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:23:06,750 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:23:06,752 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:23:06,764 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:23:06,770 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:23:06,840 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:23:06,876 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:23:06,877 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:23:06,877 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:23:06,877 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:23:06,877 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:23:06,877 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:23:06,877 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:23:06,878 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:23:06 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:23:06,878 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:23:06,878 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:23:06,878 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:23:06,878 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:23:06,878 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:23:06,878 - httpcore.connection - DEBUG - close.started
2025-05-30 00:23:06,878 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:23:06,879 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:23:06,879 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:23:06,879 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:23:06,879 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:23:06,879 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:23:06,879 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:23:06,879 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:23:06,885 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:23:06 GMT'), (b'server', b'uvicorn'), (b'content-length', b'105144'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:23:06,885 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:23:06,885 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:23:06,885 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:23:06,885 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:23:06,885 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:23:06,885 - httpcore.connection - DEBUG - close.started
2025-05-30 00:23:06,885 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:23:06,896 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:23:06,961 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:23:06,961 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:23:07,038 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:23:07,038 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:23:07,047 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:23:07,239 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:23:07,240 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:23:07,240 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:23:07,240 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:23:07,240 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:23:07,240 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:23:07,323 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:23:07,324 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:23:07,324 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:23:07,324 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:23:07,324 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:23:07,325 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:23:07,379 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:23:07 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:23:07,380 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:23:07,380 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:23:07,380 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:23:07,381 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:23:07,381 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:23:07,381 - httpcore.connection - DEBUG - close.started
2025-05-30 00:23:07,381 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:23:07,469 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:23:07 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:23:07,471 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:23:07,471 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:23:07,476 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:23:07,477 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:23:07,477 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:23:07,477 - httpcore.connection - DEBUG - close.started
2025-05-30 00:23:07,478 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:23:08,047 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:08,048 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:08,048 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:23:08,048 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:23:08,048 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:08,048 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:08,048 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:23:08,048 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:08,049 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:08,068 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:23:08,293 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD
/api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:23:09,169 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:23:09,169 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:23:09,169 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:23:09,169 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
2025-05-30 00:23:09,169 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-30 00:23:09,169 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-30 00:23:09,169 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': None}
2025-05-30 00:23:09,169 - auto_diffusers - INFO - No GPU detected, using CPU-only profile
2025-05-30 00:23:09,169 - auto_diffusers - INFO - Selected optimization profile: cpu_only
2025-05-30 00:23:09,169 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-30 00:23:09,169 - auto_diffusers - DEBUG - Prompt length: 7566 characters
2025-05-30 00:23:09,169 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:23:09,169 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API:
2025-05-30 00:23:09,169 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:23:09,170 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:23:09,170 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-30 00:23:21,834 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-30 00:23:21,836 - auto_diffusers - DEBUG - Response length: 1661 characters
2025-05-30 00:27:11,690 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:27:11,690 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:27:11,690 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:27:11,690 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:27:11,690 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:27:11,690 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:27:11,690 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:27:11,690 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:27:11,690 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:27:11,690 - hardware_detector - DEBUG - Attempting GPU detection
via nvidia-smi
2025-05-30 00:27:11,695 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:27:11,695 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:27:12,187 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:27:12,187 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:27:12,187 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:27:12,187 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:27:12,187 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:27:12,187 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:27:12,187 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:27:12,187 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:27:12,187 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:27:12,188 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:27:12,188 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:27:12,190 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:27:12,203 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:27:12,207 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:27:12,282 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:27:12,315 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:27:12,315 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:27:12,315 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:27:12 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:27:12,316 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:27:12,316 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:27:12,316 - httpcore.connection - DEBUG - close.started
2025-05-30 00:27:12,317 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:27:12,317 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:27:12,317 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:27:12,317 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:27:12,318 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:27:12,318 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:27:12,318 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:27:12,318 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:27:12,324 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:27:12 GMT'), (b'server', b'uvicorn'), (b'content-length', b'105338'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:27:12,324 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:27:12,324 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:27:12,324 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:27:12,324 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:27:12,325 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:27:12,325 - httpcore.connection - DEBUG - close.started
2025-05-30 00:27:12,325 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:27:12,335 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:27:12,366 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:27:12,366 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:27:12,478 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:27:12,478 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:27:12,484 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:27:12,648 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:27:12,649 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:27:12,649 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:27:12,649 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:27:12,649
- httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:27:12,649 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:27:12,762 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:27:12,762 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:27:12,762 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:27:12,762 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:27:12,762 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:27:12,762 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:27:12,793 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:27:12 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:27:12,793 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:27:12,793 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:27:12,793 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:27:12,793 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:27:12,793 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:27:12,793 - httpcore.connection - DEBUG - close.started 2025-05-30 00:27:12,793 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:27:12,907 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:27:12 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), 
(b'Content-Encoding', b'gzip')]) 2025-05-30 00:27:12,907 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:27:12,908 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:27:12,908 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:27:12,908 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:27:12,909 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:27:12,909 - httpcore.connection - DEBUG - close.started 2025-05-30 00:27:12,909 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:27:13,500 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:27:13,583 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:27:13,583 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:27:13,583 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:27:13,584 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:27:13,584 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:27:13,584 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:27:13,584 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:27:13,584 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:27:13,584 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:27:13,720 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD 
/api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:30:00,106 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:30:00,106 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:30:00,106 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:30:00,106 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:30:00,106 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:30:00,106 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:30:00,106 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:30:00,106 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:30:00,106 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:30:00,106 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:30:00,110 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:30:00,111 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:30:00,561 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:30:00,561 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:30:00,561 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:30:00,561 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:30:00,561 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:30:00,561 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:30:00,561 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:30:00,561 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 
00:30:00,561 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:30:00,561 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:30:00,561 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:30:00,563 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:30:00,576 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:30:00,582 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:30:00,660 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:30:00,699 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:30:00,699 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:30:00,699 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:30:00,699 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:30:00,700 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:30:00,700 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:30:00,700 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:30:00,700 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:30:00 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:30:00,701 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:30:00,701 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:30:00,701 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:30:00,701 - httpcore.http11 - DEBUG - 
response_closed.started 2025-05-30 00:30:00,701 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:30:00,701 - httpcore.connection - DEBUG - close.started 2025-05-30 00:30:00,701 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:30:00,701 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:30:00,703 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:30:00,704 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:30:00,704 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:30:00,704 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:30:00,704 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:30:00,704 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:30:00,713 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:30:00 GMT'), (b'server', b'uvicorn'), (b'content-length', b'105821'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:30:00,713 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:30:00,713 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:30:00,713 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:30:00,713 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:30:00,713 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:30:00,713 - httpcore.connection - DEBUG - close.started 2025-05-30 00:30:00,713 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:30:00,723 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:30:00,737 - httpcore.connection - DEBUG - connect_tcp.complete 
return_value= 2025-05-30 00:30:00,737 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:30:00,857 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:30:00,882 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:30:00,882 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:30:01,011 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:30:01,012 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:30:01,012 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:30:01,012 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:30:01,013 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:30:01,013 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:30:01,152 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:30:01 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:30:01,152 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:30:01,152 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:30:01,152 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:30:01,152 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:30:01,152 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:30:01,152 - httpcore.connection - DEBUG - close.started 2025-05-30 00:30:01,152 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:30:01,200 - httpcore.connection - 
DEBUG - start_tls.complete return_value= 2025-05-30 00:30:01,200 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:30:01,200 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:30:01,200 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:30:01,200 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:30:01,200 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:30:01,362 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:30:01 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:30:01,362 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:30:01,362 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:30:01,363 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:30:01,363 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:30:01,363 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:30:01,363 - httpcore.connection - DEBUG - close.started 2025-05-30 00:30:01,363 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:30:01,967 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:30:01,967 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:01,968 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:01,968 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 
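The "Known data" entry logged above records 12.0B parameters mapping to 24.0 GB of FP16 weights and 36.0 GB for inference. A minimal sketch of how such an estimate could be derived (the function name, and the 1.5x inference overhead factor, are assumptions for illustration, not the actual `SimpleMemoryCalculator` implementation):

```python
# Hypothetical sketch of an FP16 memory estimate consistent with the logged
# values: FP16 stores 2 bytes per parameter, so ~2 GB per billion parameters;
# inference overhead (activations, text encoders, VAE) is assumed to be 1.5x.

def estimate_fp16_memory(params_billions: float, inference_overhead: float = 1.5):
    """Return (weights_gb, inference_gb) for a model of the given size."""
    weights_gb = params_billions * 2.0          # 2 bytes/param
    inference_gb = weights_gb * inference_overhead
    return weights_gb, inference_gb

# FLUX.1-schnell: 12.0B params -> 24.0 GB weights, 36.0 GB inference
print(estimate_fp16_memory(12.0))  # (24.0, 36.0)
```

This reproduces the `fp16_gb` and `inference_fp16_gb` figures in the log for the 12.0B-parameter entry.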
2025-05-30 00:30:01,968 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:30:01,968 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:01,968 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:01,968 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:30:01,968 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:01,968 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:02,190 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:30:05,722 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,722 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,723 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:30:05,723 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,723 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,723 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:30:05,723 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,723 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,723 - auto_diffusers - INFO - Starting code generation 
for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:30:05,724 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4 2025-05-30 00:30:05,724 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True 2025-05-30 00:30:05,724 - auto_diffusers - INFO - Using manual hardware specifications 2025-05-30 00:30:05,724 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': None} 2025-05-30 00:30:05,724 - auto_diffusers - INFO - No GPU detected, using CPU-only profile 2025-05-30 00:30:05,724 - auto_diffusers - INFO - Selected optimization profile: cpu_only 2025-05-30 00:30:05,724 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API 2025-05-30 00:30:05,725 - auto_diffusers - DEBUG - Prompt length: 7566 characters 2025-05-30 00:30:05,725 - auto_diffusers - INFO - ================================================================================ 2025-05-30 00:30:05,725 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API: 2025-05-30 00:30:05,725 - auto_diffusers - INFO - ================================================================================ 2025-05-30 00:30:05,725 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations. NOTE: This system includes curated optimization knowledge from HuggingFace documentation. 
TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: cpu_only

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: 🔄 Requires sequential CPU offloading
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION KNOWLEDGE BASE:

# DIFFUSERS OPTIMIZATION TECHNIQUES

## Memory Optimization Techniques

### 1. Model CPU Offloading
Use `enable_model_cpu_offload()` to move models between GPU and CPU automatically:
```python
pipe.enable_model_cpu_offload()
```
- Saves significant VRAM by keeping only active models on GPU
- Automatic management, no manual intervention needed
- Compatible with all pipelines

### 2. Sequential CPU Offloading
Use `enable_sequential_cpu_offload()` for more aggressive memory saving:
```python
pipe.enable_sequential_cpu_offload()
```
- More memory efficient than model offloading
- Moves models to CPU after each forward pass
- Best for very limited VRAM scenarios

### 3. Attention Slicing
Use `enable_attention_slicing()` to reduce memory during attention computation:
```python
pipe.enable_attention_slicing()
# or specify slice size
pipe.enable_attention_slicing("max")  # maximum slicing
pipe.enable_attention_slicing(1)      # slice_size = 1
```
- Trades compute time for memory
- Most effective for high-resolution images
- Can be combined with other techniques

### 4. VAE Slicing
Use `enable_vae_slicing()` for large batch processing:
```python
pipe.enable_vae_slicing()
```
- Decodes images one at a time instead of all at once
- Essential for batch sizes > 4
- Minimal performance impact on single images

### 5. VAE Tiling
Use `enable_vae_tiling()` for high-resolution image generation:
```python
pipe.enable_vae_tiling()
```
- Enables 4K+ image generation on 8GB VRAM
- Splits images into overlapping tiles
- Automatically disabled for 512x512 or smaller images

### 6. Memory Efficient Attention (xFormers)
Use `enable_xformers_memory_efficient_attention()` if xFormers is installed:
```python
pipe.enable_xformers_memory_efficient_attention()
```
- Significantly reduces memory usage and improves speed
- Requires xformers library installation
- Compatible with most models

## Performance Optimization Techniques

### 1. Half Precision (FP16/BF16)
Use lower precision for better memory and speed:
```python
# FP16 (widely supported)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# BF16 (better numerical stability, newer hardware)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
- FP16: Halves memory usage, widely supported
- BF16: Better numerical stability, requires newer GPUs
- Essential for most optimization scenarios

### 2. Torch Compile (PyTorch 2.0+)
Use `torch.compile()` for significant speed improvements:
```python
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
# For some models, compile VAE too:
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="reduce-overhead", fullgraph=True)
```
- 5-50% speed improvement
- Requires PyTorch 2.0+
- First run is slower due to compilation

### 3. Fast Schedulers
Use faster schedulers for fewer steps:
```python
from diffusers import LMSDiscreteScheduler, UniPCMultistepScheduler
# LMS Scheduler (good quality, fast)
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
# UniPC Scheduler (fastest)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

## Hardware-Specific Optimizations

### NVIDIA GPU Optimizations
```python
# Enable Tensor Cores
torch.backends.cudnn.benchmark = True
# Optimal data type for NVIDIA
torch_dtype = torch.float16  # or torch.bfloat16 for RTX 30/40 series
```

### Apple Silicon (MPS) Optimizations
```python
# Use MPS device
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
# Recommended dtype for Apple Silicon
torch_dtype = torch.bfloat16  # Better than float16 on Apple Silicon
# Attention slicing often helps on MPS
pipe.enable_attention_slicing()
```

### CPU Optimizations
```python
# Use float32 for CPU
torch_dtype = torch.float32
# Enable optimized attention
pipe.enable_attention_slicing()
```

## Model-Specific Guidelines

### FLUX Models
- Do NOT use guidance_scale parameter (not needed for FLUX)
- Use 4-8 inference steps maximum
- BF16 dtype recommended
- Enable attention slicing for memory optimization

### Stable Diffusion XL
- Enable attention slicing for high resolutions
- Use refiner model sparingly to save memory
- Consider VAE tiling for >1024px images

### Stable Diffusion 1.5/2.1
- Very memory efficient base models
- Can often run without optimizations on 8GB+ VRAM
- Enable VAE slicing for batch processing

## Memory Usage Estimation
- FLUX.1: ~24GB for full precision, ~12GB for FP16
- SDXL: ~7GB for FP16, ~14GB for FP32
- SD 1.5: ~2GB for FP16, ~4GB for FP32

## Optimization Combinations by VRAM

### 24GB+ VRAM (High-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### 12-24GB VRAM (Mid-range)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

### 8-12GB VRAM (Entry-level)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```

### <8GB VRAM (Low-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing("max")
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Using the OPTIMIZATION KNOWLEDGE BASE above, generate Python code that:
1. **Selects the best optimization techniques** for the specific hardware profile
2. **Applies appropriate memory optimizations** based on available VRAM
3. **Uses optimal data types** for the target hardware:
   - User specified dtype (if provided): Use exactly as specified
   - Apple Silicon (MPS): prefer torch.bfloat16
   - NVIDIA GPUs: prefer torch.float16 or torch.bfloat16
   - CPU only: use torch.float32
4. **Implements hardware-specific optimizations** (CUDA, MPS, CPU)
5. 
**Follows model-specific guidelines** (e.g., FLUX guidance_scale handling)

IMPORTANT GUIDELINES:
- Reference the OPTIMIZATION KNOWLEDGE BASE to select appropriate techniques
- Include all necessary imports
- Add brief comments explaining optimization choices
- Generate compact, production-ready code
- Inline values where possible for concise code
- Generate ONLY the Python code, no explanations before or after the code block

2025-05-30 00:30:05,726 - auto_diffusers - INFO - ================================================================================ 2025-05-30 00:30:05,726 - auto_diffusers - INFO - Sending request to Gemini API 2025-05-30 00:30:17,996 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used) 2025-05-30 00:30:17,997 - auto_diffusers - DEBUG - Response length: 2079 characters 2025-05-30 00:33:31,874 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:33:31,874 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:33:31,874 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:33:31,874 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:33:31,874 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:33:31,874 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:33:31,874 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:33:31,874 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:33:31,874 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:33:31,874 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:33:31,879 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:33:31,879 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:33:32,357 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:33:32,357 - 
hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:33:32,357 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:33:32,357 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:33:32,357 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:33:32,357 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:33:32,357 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:33:32,357 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:33:32,357 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:33:32,357 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:33:32,357 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:33:32,359 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:33:32,371 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:33:32,371 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:33:32,465 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:33:32,498 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:33:32,498 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:33:32,498 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:33:32,499 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:33:32,499 - httpcore.http11 - DEBUG - send_request_body.started 
request= 2025-05-30 00:33:32,499 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:33:32,499 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:33:32,499 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:33:32 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:33:32,499 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:33:32,500 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:33:32,500 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:33:32,500 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:33:32,500 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:33:32,500 - httpcore.connection - DEBUG - close.started 2025-05-30 00:33:32,500 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:33:32,500 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:33:32,500 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:33:32,500 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:33:32,501 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:33:32,501 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:33:32,501 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:33:32,501 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:33:32,507 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:33:32 GMT'), (b'server', b'uvicorn'), (b'content-length', b'105814'), (b'content-type', b'text/html; charset=utf-8')]) 
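The hardware detection sequence logged above (platform/CPU info, an nvidia-smi probe, then falling back to PyTorch's CUDA/MPS flags) can be sketched roughly as below. This is a hypothetical reconstruction of the flow, not the actual HardwareDetector source; the function name `detect_hardware` and the exact nvidia-smi query are assumptions.

```python
import os
import platform
import shutil
import subprocess

def detect_hardware() -> dict:
    """Hypothetical sketch of the detection flow seen in the log:
    collect platform/CPU info, probe nvidia-smi, then check PyTorch flags."""
    specs = {
        "platform": platform.system(),
        "architecture": platform.machine(),
        "cpu_count": os.cpu_count(),
        "python_version": platform.python_version(),
        "gpu_info": None,
        "cuda_available": False,
        "mps_available": False,
        "torch_version": None,
    }
    # Probe nvidia-smi first; if the binary is missing, assume no NVIDIA GPU
    # (matching the "nvidia-smi not found" DEBUG entries above)
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        if out.returncode == 0:
            specs["gpu_info"] = out.stdout.strip() or None
    # Fall back to PyTorch backend flags if torch is importable
    try:
        import torch
        specs["torch_version"] = torch.__version__
        specs["cuda_available"] = torch.cuda.is_available()
        specs["mps_available"] = torch.backends.mps.is_available()
    except ImportError:
        pass  # detection still succeeds without PyTorch installed
    return specs
```

On the Apple Silicon machine in the log this flow would report `cuda_available: False` and `mps_available: True`, which is what drives the later choice of optimization profile.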
2025-05-30 00:33:32,507 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:33:32,507 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:33:32,507 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:33:32,507 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:33:32,507 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:33:32,507 - httpcore.connection - DEBUG - close.started 2025-05-30 00:33:32,507 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:33:32,517 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:33:32,537 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:33:32,537 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:33:32,654 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:33:32,654 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:33:32,677 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:33:32,813 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:33:32,813 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:33:32,813 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:33:32,813 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:33:32,813 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:33:32,813 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:33:32,928 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:33:32,929 - httpcore.http11 - DEBUG - send_request_headers.started 
request= 2025-05-30 00:33:32,929 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:33:32,929 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:33:32,929 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:33:32,929 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:33:32,955 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:33:32 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:33:32,955 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:33:32,955 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:33:32,955 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:33:32,955 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:33:32,955 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:33:32,955 - httpcore.connection - DEBUG - close.started 2025-05-30 00:33:32,956 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:33:33,068 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:33:33 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:33:33,069 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:33:33,069 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:33:33,069 - httpcore.http11 - DEBUG - receive_response_body.complete 
2025-05-30 00:33:33,069 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:33:33,069 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:33:33,069 - httpcore.connection - DEBUG - close.started
2025-05-30 00:33:33,069 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:33:33,701 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:33:33,917 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:33:34,332 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:34,332 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:34,332 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:33:34,332 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:33:34,333 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:34,333 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:34,333 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:33:34,333 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:34,333 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,242 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,242 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,242 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:33:38,242 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,242 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,243 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:33:38,243 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,243 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,243 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:33:38,243 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4
2025-05-30 00:33:38,243 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True
2025-05-30 00:33:38,243 - auto_diffusers - INFO - Using manual hardware specifications
2025-05-30 00:33:38,243 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': None}
2025-05-30 00:33:38,243 - auto_diffusers - INFO - No GPU detected, using CPU-only profile
2025-05-30 00:33:38,243 - auto_diffusers - INFO - Selected optimization profile: cpu_only
2025-05-30 00:33:38,243 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API
2025-05-30 00:33:38,243 - auto_diffusers - DEBUG - Prompt length: 7566 characters
2025-05-30 00:33:38,243 - auto_diffusers - INFO - ================================================================================
2025-05-30
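The figures the calculator logs here ({'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}) follow a simple bytes-per-parameter rule of thumb. A minimal sketch of that arithmetic, assuming 2 bytes per FP16 parameter and a 1.5x inference overhead for activations; the helper name `estimate_memory_gb` is illustrative, not the actual SimpleMemoryCalculator API.

```python
def estimate_memory_gb(params_billions: float, dtype_bytes: int = 2,
                       inference_overhead: float = 1.5) -> dict:
    """Rough estimate: 1e9 parameters at N bytes each is ~N GB of weights;
    inference adds activation/cache overhead on top of the weights."""
    weights_gb = params_billions * dtype_bytes
    return {
        "weights_gb": weights_gb,
        "inference_gb": weights_gb * inference_overhead,
    }

# FLUX.1-schnell: 12B parameters at FP16 reproduces the logged figures
est = estimate_memory_gb(12.0)  # → {'weights_gb': 24.0, 'inference_gb': 36.0}
```

Against the 8.0 GB VRAM budget in the log, a 36 GB inference footprint is what triggers the "requires sequential CPU offloading" recommendation.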
00:33:38,243 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API:
2025-05-30 00:33:38,243 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:33:38,243 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations.

NOTE: This system includes curated optimization knowledge from HuggingFace documentation.

TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: cpu_only

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: 🔄 Requires sequential CPU offloading
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION KNOWLEDGE BASE:

# DIFFUSERS OPTIMIZATION TECHNIQUES

## Memory Optimization Techniques

### 1. Model CPU Offloading
Use `enable_model_cpu_offload()` to move models between GPU and CPU automatically:
```python
pipe.enable_model_cpu_offload()
```
- Saves significant VRAM by keeping only active models on GPU
- Automatic management, no manual intervention needed
- Compatible with all pipelines

### 2. Sequential CPU Offloading
Use `enable_sequential_cpu_offload()` for more aggressive memory saving:
```python
pipe.enable_sequential_cpu_offload()
```
- More memory efficient than model offloading
- Moves models to CPU after each forward pass
- Best for very limited VRAM scenarios

### 3. Attention Slicing
Use `enable_attention_slicing()` to reduce memory during attention computation:
```python
pipe.enable_attention_slicing()
# or specify slice size
pipe.enable_attention_slicing("max")  # maximum slicing
pipe.enable_attention_slicing(1)  # slice_size = 1
```
- Trades compute time for memory
- Most effective for high-resolution images
- Can be combined with other techniques

### 4. VAE Slicing
Use `enable_vae_slicing()` for large batch processing:
```python
pipe.enable_vae_slicing()
```
- Decodes images one at a time instead of all at once
- Essential for batch sizes > 4
- Minimal performance impact on single images

### 5. VAE Tiling
Use `enable_vae_tiling()` for high-resolution image generation:
```python
pipe.enable_vae_tiling()
```
- Enables 4K+ image generation on 8GB VRAM
- Splits images into overlapping tiles
- Automatically disabled for 512x512 or smaller images

### 6. Memory Efficient Attention (xFormers)
Use `enable_xformers_memory_efficient_attention()` if xFormers is installed:
```python
pipe.enable_xformers_memory_efficient_attention()
```
- Significantly reduces memory usage and improves speed
- Requires xformers library installation
- Compatible with most models

## Performance Optimization Techniques

### 1. Half Precision (FP16/BF16)
Use lower precision for better memory and speed:
```python
# FP16 (widely supported)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# BF16 (better numerical stability, newer hardware)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
- FP16: Halves memory usage, widely supported
- BF16: Better numerical stability, requires newer GPUs
- Essential for most optimization scenarios

### 2. Torch Compile (PyTorch 2.0+)
Use `torch.compile()` for significant speed improvements:
```python
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
# For some models, compile VAE too:
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="reduce-overhead", fullgraph=True)
```
- 5-50% speed improvement
- Requires PyTorch 2.0+
- First run is slower due to compilation

### 3. Fast Schedulers
Use faster schedulers for fewer steps:
```python
from diffusers import LMSDiscreteScheduler, UniPCMultistepScheduler
# LMS Scheduler (good quality, fast)
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
# UniPC Scheduler (fastest)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

## Hardware-Specific Optimizations

### NVIDIA GPU Optimizations
```python
# Enable Tensor Cores
torch.backends.cudnn.benchmark = True
# Optimal data type for NVIDIA
torch_dtype = torch.float16  # or torch.bfloat16 for RTX 30/40 series
```

### Apple Silicon (MPS) Optimizations
```python
# Use MPS device
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
# Recommended dtype for Apple Silicon
torch_dtype = torch.bfloat16  # Better than float16 on Apple Silicon
# Attention slicing often helps on MPS
pipe.enable_attention_slicing()
```

### CPU Optimizations
```python
# Use float32 for CPU
torch_dtype = torch.float32
# Enable optimized attention
pipe.enable_attention_slicing()
```

## Model-Specific Guidelines

### FLUX Models
- Do NOT use guidance_scale parameter (not needed for FLUX)
- Use 4-8 inference steps maximum
- BF16 dtype recommended
- Enable attention slicing for memory optimization

### Stable Diffusion XL
- Enable attention slicing for high resolutions
- Use refiner model sparingly to save memory
- Consider VAE tiling for >1024px images

### Stable Diffusion 1.5/2.1
- Very memory efficient base models
- Can often run without optimizations on 8GB+ VRAM
- Enable VAE slicing for batch processing

## Memory Usage Estimation
- FLUX.1: ~24GB for full precision, ~12GB for FP16
- SDXL: ~7GB for FP16, ~14GB for FP32
- SD 1.5: ~2GB for FP16, ~4GB for FP32

## Optimization Combinations by VRAM

### 24GB+ VRAM (High-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### 12-24GB VRAM (Mid-range)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

### 8-12GB VRAM (Entry-level)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```

### <8GB VRAM (Low-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing("max")
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Using the OPTIMIZATION KNOWLEDGE BASE above, generate Python code that:
1. **Selects the best optimization techniques** for the specific hardware profile
2. **Applies appropriate memory optimizations** based on available VRAM
3. **Uses optimal data types** for the target hardware:
   - User specified dtype (if provided): Use exactly as specified
   - Apple Silicon (MPS): prefer torch.bfloat16
   - NVIDIA GPUs: prefer torch.float16 or torch.bfloat16
   - CPU only: use torch.float32
4. **Implements hardware-specific optimizations** (CUDA, MPS, CPU)
5.
**Follows model-specific guidelines** (e.g., FLUX guidance_scale handling)

IMPORTANT GUIDELINES:
- Reference the OPTIMIZATION KNOWLEDGE BASE to select appropriate techniques
- Include all necessary imports
- Add brief comments explaining optimization choices
- Generate compact, production-ready code
- Inline values where possible for concise code
- Generate ONLY the Python code, no explanations before or after the code block
2025-05-30 00:33:38,244 - auto_diffusers - INFO - ================================================================================
2025-05-30 00:33:38,244 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-30 00:33:51,592 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-30 00:33:51,593 - auto_diffusers - DEBUG - Response length: 1708 characters
2025-05-30 00:36:05,006 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:36:05,006 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:36:05,006 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:36:05,006 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:36:05,006 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:36:05,006 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:36:05,006 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:36:05,006 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:36:05,006 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:36:05,006 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:36:05,009 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:36:05,009 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:36:05,422 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:36:05,422 -
hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:36:05,422 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:36:05,422 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:36:05,422 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:36:05,422 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:36:05,422 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:36:05,422 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:36:05,422 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:36:05,422 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:36:05,422 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:36:05,424 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:36:05,437 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:36:05,442 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:36:05,513 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:36:05,548 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:36:05,548 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:36:05,548 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:36:05,548 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - send_request_body.started 
request= 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:36:05 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:36:05,549 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:36:05,549 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:36:05,549 - httpcore.connection - DEBUG - close.started 2025-05-30 00:36:05,550 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:36:05,550 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:36:05,550 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:36:05,550 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:36:05,550 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:36:05,550 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:36:05,551 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:36:05,551 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:36:05,557 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:36:05 GMT'), (b'server', b'uvicorn'), (b'content-length', b'105805'), (b'content-type', b'text/html; charset=utf-8')]) 
2025-05-30 00:36:05,557 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:36:05,557 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:36:05,557 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:36:05,557 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:36:05,557 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:36:05,557 - httpcore.connection - DEBUG - close.started 2025-05-30 00:36:05,557 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:36:05,568 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:36:05,681 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:36:05,681 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:36:05,718 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:36:05,718 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:36:05,734 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:36:05,966 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:36:05,966 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:36:05,967 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:36:05,967 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:36:05,967 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:36:05,967 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:36:06,020 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:36:06,020 - httpcore.http11 - DEBUG - send_request_headers.started 
request= 2025-05-30 00:36:06,020 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:36:06,020 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:36:06,021 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:36:06,021 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:36:06,109 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:36:06 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:36:06,110 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:36:06,110 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:36:06,110 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:36:06,110 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:36:06,110 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:36:06,110 - httpcore.connection - DEBUG - close.started 2025-05-30 00:36:06,110 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:36:06,172 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:36:06 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:36:06,172 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:36:06,172 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:36:06,172 - httpcore.http11 - DEBUG - receive_response_body.complete 
2025-05-30 00:36:06,172 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:36:06,172 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:36:06,172 - httpcore.connection - DEBUG - close.started 2025-05-30 00:36:06,172 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:36:06,788 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:36:07,005 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:36:25,970 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:36:25,971 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:36:25,971 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:36:25,971 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:36:25,971 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:36:25,971 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:36:25,971 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:36:25,971 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:36:25,972 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:24,618 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:41:24,618 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:41:24,618 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:41:24,618 - auto_diffusers - DEBUG - API key 
length: 39 2025-05-30 00:41:24,618 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:41:24,618 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:41:24,618 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:41:24,618 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:41:24,618 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:41:24,618 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:41:24,622 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:41:24,623 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:41:25,060 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:41:25,060 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:41:25,060 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:41:25,060 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:41:25,060 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:41:25,060 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:41:25,060 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:41:25,060 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:41:25,060 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:41:25,060 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:41:25,060 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:41:25,062 - asyncio - DEBUG - Using selector: 
KqueueSelector 2025-05-30 00:41:25,076 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:41:25,082 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:41:25,164 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:41:25,195 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:41:25,196 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:41:25,196 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:41:25,196 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:41:25,196 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:41:25 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:41:25,197 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:41:25,197 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:41:25,197 - httpcore.connection - DEBUG - close.started 2025-05-30 00:41:25,197 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:41:25,198 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 
local_address=None timeout=3 socket_options=None 2025-05-30 00:41:25,198 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:41:25,198 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:41:25,198 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:41:25,198 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:41:25,198 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:41:25,198 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:41:25,205 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:41:25 GMT'), (b'server', b'uvicorn'), (b'content-length', b'104937'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:41:25,206 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:41:25,206 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:41:25,206 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:41:25,206 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:41:25,206 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:41:25,206 - httpcore.connection - DEBUG - close.started 2025-05-30 00:41:25,206 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:41:25,217 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:41:25,326 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:41:25,326 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:41:25,367 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:41:25,371 - httpcore.connection - DEBUG - 
connect_tcp.complete return_value= 2025-05-30 00:41:25,372 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:41:25,608 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:41:25,608 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:41:25,609 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:41:25,609 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:41:25,609 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:41:25,609 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:41:25,681 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:41:25,682 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:41:25,682 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:41:25,682 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:41:25,682 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:41:25,682 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:41:25,750 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:41:25 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:41:25,751 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:41:25,751 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:41:25,751 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:41:25,751 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:41:25,751 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 
00:41:25,751 - httpcore.connection - DEBUG - close.started 2025-05-30 00:41:25,751 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:41:25,839 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:41:25 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:41:25,840 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:41:25,840 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:41:25,841 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:41:25,841 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:41:25,841 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:41:25,841 - httpcore.connection - DEBUG - close.started 2025-05-30 00:41:25,841 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:41:26,445 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:41:26,683 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:41:27,126 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:27,127 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:27,127 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:41:27,127 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:41:27,127 - simple_memory_calculator - INFO - 
Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:27,127 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:27,127 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:41:27,128 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:27,128 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:28,862 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:28,863 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:28,863 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:41:28,863 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:28,863 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:28,863 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:41:28,863 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:28,863 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:30,138 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:30,139 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:30,139 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 
00:41:30,139 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:30,139 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:30,139 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:41:30,139 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:41:30,140 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:45,643 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:43:45,643 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:43:45,643 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:43:45,643 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:43:45,644 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:43:45,644 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:43:45,644 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:43:45,644 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:43:45,644 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:43:45,644 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:43:45,647 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:43:45,647 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:43:46,100 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:43:46,100 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:43:46,100 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:43:46,100 - hardware_detector - DEBUG - Detected specs: 
{'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:43:46,100 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:43:46,100 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:43:46,100 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:43:46,100 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:43:46,100 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:43:46,100 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:43:46,100 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:43:46,103 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:43:46,116 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:43:46,120 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:43:46,191 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:43:46,228 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:43:46,228 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:43:46,228 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:43:46,229 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:43:46,229 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:43:46,229 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:43:46,229 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:43:46,229 - httpcore.http11 - DEBUG - 
receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:43:46 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:43:46,229 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:43:46,229 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:43:46,230 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:43:46,230 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:43:46,230 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:43:46,230 - httpcore.connection - DEBUG - close.started 2025-05-30 00:43:46,230 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:43:46,230 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:43:46,230 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:43:46,230 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:43:46,231 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:43:46,231 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:43:46,231 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:43:46,231 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:43:46,237 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:43:46 GMT'), (b'server', b'uvicorn'), (b'content-length', b'105071'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:43:46,237 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:43:46,237 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:43:46,237 - httpcore.http11 - DEBUG 
- receive_response_body.complete 2025-05-30 00:43:46,237 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:43:46,237 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:43:46,237 - httpcore.connection - DEBUG - close.started 2025-05-30 00:43:46,237 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:43:46,247 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:43:46,317 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:43:46,317 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:43:46,384 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:43:46,384 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:43:46,394 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:43:46,603 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:43:46,604 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:43:46,604 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:43:46,605 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:43:46,605 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:43:46,605 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:43:46,659 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:43:46,660 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:43:46,660 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:43:46,661 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:43:46,661 - httpcore.http11 - DEBUG - 
send_request_body.complete 2025-05-30 00:43:46,661 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:43:46,753 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:43:46 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:43:46,753 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:43:46,754 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:43:46,754 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:43:46,754 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:43:46,754 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:43:46,754 - httpcore.connection - DEBUG - close.started 2025-05-30 00:43:46,755 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:43:46,800 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:43:46 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:43:46,800 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:43:46,800 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:43:46,801 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:43:46,801 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:43:46,801 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:43:46,801 - httpcore.connection - DEBUG - close.started 2025-05-30 
00:43:46,801 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:43:47,349 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:47,349 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:47,349 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:43:47,349 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:43:47,349 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:47,349 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:47,350 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:43:47,350 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:47,350 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:43:47,379 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:43:47,604 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:45:54,947 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:45:54,947 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:45:54,947 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:45:54,947 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:45:54,947 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:45:54,947 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:45:54,947 
- hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:45:54,947 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:45:54,947 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:45:54,947 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:45:54,950 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:45:54,950 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:45:55,354 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:45:55,354 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:45:55,354 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:45:55,354 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:45:55,354 - auto_diffusers - INFO - Hardware detector initialized successfully 2025-05-30 00:45:55,354 - __main__ - INFO - AutoDiffusersGenerator initialized successfully 2025-05-30 00:45:55,354 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator 2025-05-30 00:45:55,354 - simple_memory_calculator - DEBUG - HuggingFace API initialized 2025-05-30 00:45:55,354 - simple_memory_calculator - DEBUG - Known models in database: 4 2025-05-30 00:45:55,354 - __main__ - INFO - SimpleMemoryCalculator initialized successfully 2025-05-30 00:45:55,354 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7 2025-05-30 00:45:55,356 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:45:55,369 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None 2025-05-30 00:45:55,374 - urllib3.connectionpool - DEBUG - 
Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:45:55,445 - asyncio - DEBUG - Using selector: KqueueSelector 2025-05-30 00:45:55,480 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None 2025-05-30 00:45:55,480 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:45:55,480 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:45:55,481 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:45:55,481 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:45:55,481 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:45:55,481 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:45:55,481 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:45:55 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')]) 2025-05-30 00:45:55,482 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK" 2025-05-30 00:45:55,482 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:45:55,482 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:45:55,482 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:45:55,482 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:45:55,482 - httpcore.connection - DEBUG - close.started 2025-05-30 00:45:55,482 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:45:55,482 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None 2025-05-30 00:45:55,483 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:45:55,483 - httpcore.http11 - DEBUG - send_request_headers.started request= 
2025-05-30 00:45:55,483 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:45:55,483 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:45:55,483 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:45:55,483 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:45:55,489 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:45:55 GMT'), (b'server', b'uvicorn'), (b'content-length', b'103733'), (b'content-type', b'text/html; charset=utf-8')]) 2025-05-30 00:45:55,489 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK" 2025-05-30 00:45:55,489 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:45:55,489 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:45:55,489 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:45:55,489 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:45:55,489 - httpcore.connection - DEBUG - close.started 2025-05-30 00:45:55,489 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:45:55,500 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None 2025-05-30 00:45:55,532 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:45:55,532 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3 2025-05-30 00:45:55,637 - httpcore.connection - DEBUG - connect_tcp.complete return_value= 2025-05-30 00:45:55,638 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30 2025-05-30 00:45:55,646 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0 2025-05-30 00:45:55,812 - httpcore.connection - DEBUG - 
start_tls.complete return_value= 2025-05-30 00:45:55,812 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:45:55,812 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:45:55,812 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:45:55,812 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:45:55,812 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:45:55,912 - httpcore.connection - DEBUG - start_tls.complete return_value= 2025-05-30 00:45:55,913 - httpcore.http11 - DEBUG - send_request_headers.started request= 2025-05-30 00:45:55,913 - httpcore.http11 - DEBUG - send_request_headers.complete 2025-05-30 00:45:55,913 - httpcore.http11 - DEBUG - send_request_body.started request= 2025-05-30 00:45:55,913 - httpcore.http11 - DEBUG - send_request_body.complete 2025-05-30 00:45:55,913 - httpcore.http11 - DEBUG - receive_response_headers.started request= 2025-05-30 00:45:55,954 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:45:55 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')]) 2025-05-30 00:45:55,954 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK" 2025-05-30 00:45:55,954 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:45:55,954 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:45:55,955 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:45:55,955 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:45:55,955 - httpcore.connection - DEBUG - close.started 2025-05-30 00:45:55,955 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:45:56,053 - httpcore.http11 - DEBUG - receive_response_headers.complete 
return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:45:55 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')]) 2025-05-30 00:45:56,054 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK" 2025-05-30 00:45:56,054 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 00:45:56,055 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 00:45:56,055 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 00:45:56,055 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 00:45:56,055 - httpcore.connection - DEBUG - close.started 2025-05-30 00:45:56,056 - httpcore.connection - DEBUG - close.complete 2025-05-30 00:45:56,657 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 00:45:56,876 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 00:45:58,569 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:45:58,569 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:45:58,569 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 00:45:58,569 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 00:45:58,569 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:45:58,569 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 
00:45:58,569 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 00:45:58,570 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 00:45:58,570 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 00:47:58,681 - __main__ - INFO - Initializing GradioAutodiffusers 2025-05-30 00:47:58,682 - __main__ - DEBUG - API key found, length: 39 2025-05-30 00:47:58,682 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator 2025-05-30 00:47:58,682 - auto_diffusers - DEBUG - API key length: 39 2025-05-30 00:47:58,682 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools 2025-05-30 00:47:58,682 - hardware_detector - INFO - Initializing HardwareDetector 2025-05-30 00:47:58,682 - hardware_detector - DEBUG - Starting system hardware detection 2025-05-30 00:47:58,682 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64 2025-05-30 00:47:58,682 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11 2025-05-30 00:47:58,682 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi 2025-05-30 00:47:58,685 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected 2025-05-30 00:47:58,685 - hardware_detector - DEBUG - Checking PyTorch availability 2025-05-30 00:47:59,088 - hardware_detector - INFO - PyTorch 2.7.0 detected 2025-05-30 00:47:59,088 - hardware_detector - DEBUG - CUDA available: False, MPS available: True 2025-05-30 00:47:59,088 - hardware_detector - INFO - Hardware detection completed successfully 2025-05-30 00:47:59,088 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'} 2025-05-30 00:47:59,088 - auto_diffusers - INFO - Hardware detector 
initialized successfully
2025-05-30 00:47:59,088 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:47:59,088 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:47:59,088 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:47:59,089 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:47:59,089 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:47:59,089 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:47:59,091 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:47:59,104 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:47:59,109 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:47:59,179 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:47:59,214 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:47:59,215 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:47:59,215 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:47:59,215 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:47:59,215 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:47:59,215 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:47:59,216 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:47:59,216 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:47:59 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:47:59,216 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:47:59,216 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:47:59,216 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:47:59,216 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:47:59,216 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:47:59,216 - httpcore.connection - DEBUG - close.started
2025-05-30 00:47:59,216 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:47:59,217 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:47:59,217 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:47:59,217 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:47:59,217 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:47:59,217 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:47:59,217 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:47:59,217 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:47:59,223 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:47:59 GMT'), (b'server', b'uvicorn'), (b'content-length', b'103526'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:47:59,224 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:47:59,224 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:47:59,224 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:47:59,224 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:47:59,224 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:47:59,224 - httpcore.connection - DEBUG - close.started
2025-05-30 00:47:59,224 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:47:59,235 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:47:59,375 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:47:59,424 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:47:59,424 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:47:59,426 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:47:59,426 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:47:59,699 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:47:59,699 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:47:59,700 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:47:59,700 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:47:59,700 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:47:59,700 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:47:59,705 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:47:59,705 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:47:59,706 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:47:59,706 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:47:59,706 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:47:59,707 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:47:59,840 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:47:59 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:47:59,841 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:47:59,841 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:47:59,842 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:47:59,842 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:47:59,842 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:47:59,843 - httpcore.connection - DEBUG - close.started
2025-05-30 00:47:59,843 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:47:59,854 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:47:59 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:47:59,856 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:47:59,856 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:47:59,856 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:47:59,856 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:47:59,856 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:47:59,857 - httpcore.connection - DEBUG - close.started
2025-05-30 00:47:59,857 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:48:00,536 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:48:00,763 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:48:01,404 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:48:01,404 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:48:01,404 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:48:01,404 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:48:01,404 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:48:01,404 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:48:01,404 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:48:01,404 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:48:01,404 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:30,798 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:49:30,798 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:49:30,798 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:49:30,798 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:49:30,798 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:49:30,798 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:49:30,798 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:49:30,798 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:49:30,798 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:49:30,798 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:49:30,803 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:49:30,803 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:49:31,302 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:49:31,302 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:49:31,302 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:49:31,302 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:49:31,302 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:49:31,302 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:49:31,302 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:49:31,302 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:49:31,302 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:49:31,302 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:49:31,302 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:49:31,304 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:49:31,318 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:49:31,323 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:49:31,393 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:49:31,429 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:49:31,429 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:49:31,429 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:49:31,429 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:49:31,429 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:49:31 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:49:31,430 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:49:31,430 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:49:31,430 - httpcore.connection - DEBUG - close.started
2025-05-30 00:49:31,430 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:49:31,431 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:49:31,431 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:49:31,431 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:49:31,431 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:49:31,431 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:49:31,431 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:49:31,431 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:49:31,437 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:49:31 GMT'), (b'server', b'uvicorn'), (b'content-length', b'102528'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:49:31,437 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:49:31,438 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:49:31,438 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:49:31,438 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:49:31,438 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:49:31,438 - httpcore.connection - DEBUG - close.started
2025-05-30 00:49:31,438 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:49:31,448 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:49:31,483 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:49:31,483 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:49:31,588 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:49:31,588 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:49:31,648 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:49:31,769 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:49:31,770 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:49:31,770 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:49:31,771 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:49:31,771 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:49:31,771 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:49:31,957 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:49:31 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:49:31,958 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:49:31,959 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:49:31,959 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:49:31,959 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:49:31,960 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:49:31,960 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:49:31,960 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:49:31,960 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:49:31,960 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:49:31,960 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:49:31,961 - httpcore.connection - DEBUG - close.started
2025-05-30 00:49:31,961 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:49:31,961 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:49:32,105 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:49:32 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:49:32,106 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:49:32,106 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:49:32,107 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:49:32,107 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:49:32,107 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:49:32,107 - httpcore.connection - DEBUG - close.started
2025-05-30 00:49:32,108 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:49:32,646 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:32,647 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:32,647 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:49:32,647 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:49:32,647 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:32,647 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:32,647 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:49:32,647 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:32,647 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:49:32,765 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:49:33,005 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:53:47,386 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 00:53:47,386 - __main__ - DEBUG - API key found, length: 39
2025-05-30 00:53:47,386 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 00:53:47,386 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 00:53:47,386 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 00:53:47,386 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 00:53:47,386 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 00:53:47,386 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 00:53:47,386 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 00:53:47,386 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 00:53:47,392 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 00:53:47,392 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 00:53:47,845 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 00:53:47,845 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 00:53:47,845 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 00:53:47,845 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 00:53:47,845 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 00:53:47,845 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 00:53:47,845 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 00:53:47,845 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 00:53:47,845 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 00:53:47,845 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 00:53:47,845 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 00:53:47,847 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:53:47,861 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 00:53:47,861 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:53:47,946 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 00:53:47,981 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 00:53:47,982 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:53:47,982 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:53:47,983 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:53:47,985 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:53:47,985 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:53:47,985 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:53:47,985 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:53:47 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 00:53:47,986 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 00:53:47,986 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:53:47,986 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:53:47,986 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:53:47,986 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:53:47,986 - httpcore.connection - DEBUG - close.started
2025-05-30 00:53:47,986 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:53:47,986 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 00:53:47,987 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:53:47,987 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:53:47,987 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:53:47,987 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:53:47,987 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:53:47,988 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:53:47,994 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 15:53:47 GMT'), (b'server', b'uvicorn'), (b'content-length', b'95944'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 00:53:47,994 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 00:53:47,994 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:53:47,994 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:53:47,994 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:53:47,994 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:53:47,994 - httpcore.connection - DEBUG - close.started
2025-05-30 00:53:47,994 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:53:48,005 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 00:53:48,021 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:53:48,022 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 00:53:48,142 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 00:53:48,142 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 00:53:48,149 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 00:53:48,296 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:53:48,296 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:53:48,297 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:53:48,297 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:53:48,297 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:53:48,297 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:53:48,420 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 00:53:48,420 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 00:53:48,420 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 00:53:48,420 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 00:53:48,420 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 00:53:48,420 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 00:53:48,434 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:53:48 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 00:53:48,435 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 00:53:48,435 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:53:48,435 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:53:48,435 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:53:48,435 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:53:48,436 - httpcore.connection - DEBUG - close.started
2025-05-30 00:53:48,436 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:53:48,560 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 15:53:48 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 00:53:48,561 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30 00:53:48,562 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 00:53:48,562 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 00:53:48,563 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 00:53:48,563 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 00:53:48,563 - httpcore.connection - DEBUG - close.started
2025-05-30 00:53:48,563 - httpcore.connection - DEBUG - close.complete
2025-05-30 00:53:49,159 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 00:53:49,382 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-05-30 00:56:51,723 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:56:51,724 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:56:51,724 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
2025-05-30 00:56:51,724 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM
2025-05-30 00:56:51,725 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:56:51,725 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 00:56:51,725 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB
2025-05-30 00:56:51,725 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell
2025-05-30 00:56:51,725 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell
2025-05-30 01:00:36,891 - __main__ - INFO - Initializing GradioAutodiffusers
2025-05-30 01:00:36,891 - __main__ - DEBUG - API key found, length: 39
2025-05-30 01:00:36,891 - auto_diffusers - INFO - Initializing AutoDiffusersGenerator
2025-05-30 01:00:36,891 - auto_diffusers - DEBUG - API key length: 39
2025-05-30 01:00:36,891 - auto_diffusers - WARNING - Tool calling dependencies not available, running without tools
2025-05-30 01:00:36,891 - hardware_detector - INFO - Initializing HardwareDetector
2025-05-30 01:00:36,891 - hardware_detector - DEBUG - Starting system hardware detection
2025-05-30 01:00:36,891 - hardware_detector - DEBUG - Platform: Darwin, Architecture: arm64
2025-05-30 01:00:36,891 - hardware_detector - DEBUG - CPU cores: 16, Python: 3.11.11
2025-05-30 01:00:36,891 - hardware_detector - DEBUG - Attempting GPU detection via nvidia-smi
2025-05-30 01:00:36,895 - hardware_detector - DEBUG - nvidia-smi not found, no NVIDIA GPU detected
2025-05-30 01:00:36,895 - hardware_detector - DEBUG - Checking PyTorch availability
2025-05-30 01:00:37,384 - hardware_detector - INFO - PyTorch 2.7.0 detected
2025-05-30 01:00:37,384 - hardware_detector - DEBUG - CUDA available: False, MPS available: True
2025-05-30 01:00:37,384 - hardware_detector - INFO - Hardware detection completed successfully
2025-05-30 01:00:37,384 - hardware_detector - DEBUG - Detected specs: {'platform': 'Darwin', 'architecture': 'arm64', 'cpu_count': 16, 'python_version': '3.11.11', 'gpu_info': None, 'cuda_available': False, 'mps_available': True, 'torch_version': '2.7.0'}
2025-05-30 01:00:37,384 - auto_diffusers - INFO - Hardware detector initialized successfully
2025-05-30 01:00:37,384 - __main__ - INFO - AutoDiffusersGenerator initialized successfully
2025-05-30 01:00:37,384 - simple_memory_calculator - INFO - Initializing SimpleMemoryCalculator
2025-05-30 01:00:37,384 - simple_memory_calculator - DEBUG - HuggingFace API initialized
2025-05-30 01:00:37,384 - simple_memory_calculator - DEBUG - Known models in database: 4
2025-05-30 01:00:37,384 - __main__ - INFO - SimpleMemoryCalculator initialized successfully
2025-05-30 01:00:37,384 - __main__ - DEBUG - Default model settings: gemini-2.5-flash-preview-05-20, temp=0.7
2025-05-30 01:00:37,386 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 01:00:37,399 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=3 socket_options=None
2025-05-30 01:00:37,399 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-05-30 01:00:37,476 - asyncio - DEBUG - Using selector: KqueueSelector
2025-05-30 01:00:37,512 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=None socket_options=None
2025-05-30 01:00:37,512 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 01:00:37,513 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 01:00:37,513 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 01:00:37,513 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 01:00:37,513 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 01:00:37,513 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 01:00:37,513 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 16:00:37 GMT'), (b'server', b'uvicorn'), (b'content-length', b'4'), (b'content-type', b'application/json')])
2025-05-30 01:00:37,513 - httpx - INFO - HTTP Request: GET http://localhost:7861/gradio_api/startup-events "HTTP/1.1 200 OK"
2025-05-30 01:00:37,514 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 01:00:37,514 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 01:00:37,514 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 01:00:37,514 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 01:00:37,514 - httpcore.connection - DEBUG - close.started
2025-05-30 01:00:37,514 - httpcore.connection - DEBUG - close.complete
2025-05-30 01:00:37,514 - httpcore.connection - DEBUG - connect_tcp.started host='localhost' port=7861 local_address=None timeout=3 socket_options=None
2025-05-30 01:00:37,515 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 01:00:37,515 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 01:00:37,515 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 01:00:37,515 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 01:00:37,515 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 01:00:37,515 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 01:00:37,521 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'date', b'Thu, 29 May 2025 16:00:37 GMT'), (b'server', b'uvicorn'), (b'content-length', b'95123'), (b'content-type', b'text/html; charset=utf-8')])
2025-05-30 01:00:37,521 - httpx - INFO - HTTP Request: HEAD http://localhost:7861/ "HTTP/1.1 200 OK"
2025-05-30 01:00:37,521 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 01:00:37,521 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 01:00:37,521 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 01:00:37,521 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 01:00:37,521 - httpcore.connection - DEBUG - close.started
2025-05-30 01:00:37,521 - httpcore.connection - DEBUG - close.complete
2025-05-30 01:00:37,532 - httpcore.connection - DEBUG - connect_tcp.started host='api.gradio.app' port=443 local_address=None timeout=30 socket_options=None
2025-05-30 01:00:37,634 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 01:00:37,635 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=3
2025-05-30 01:00:37,672 - httpcore.connection - DEBUG - connect_tcp.complete return_value=
2025-05-30 01:00:37,672 - httpcore.connection - DEBUG - start_tls.started ssl_context= server_hostname='api.gradio.app' timeout=30
2025-05-30 01:00:37,681 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/initiated HTTP/1.1" 200 0
2025-05-30 01:00:37,911 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 01:00:37,911 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 01:00:37,912 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 01:00:37,912 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 01:00:37,913 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 01:00:37,913 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 01:00:37,953 - httpcore.connection - DEBUG - start_tls.complete return_value=
2025-05-30 01:00:37,954 - httpcore.http11 - DEBUG - send_request_headers.started request=
2025-05-30 01:00:37,955 - httpcore.http11 - DEBUG - send_request_headers.complete
2025-05-30 01:00:37,955 - httpcore.http11 - DEBUG - send_request_body.started request=
2025-05-30 01:00:37,955 - httpcore.http11 - DEBUG - send_request_body.complete
2025-05-30 01:00:37,955 - httpcore.http11 - DEBUG - receive_response_headers.started request=
2025-05-30 01:00:38,052 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 16:00:37 GMT'), (b'Content-Type', b'application/json'), (b'Content-Length', b'21'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'Access-Control-Allow-Origin', b'*')])
2025-05-30 01:00:38,053 - httpx - INFO - HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
2025-05-30 01:00:38,054 - httpcore.http11 - DEBUG - receive_response_body.started request=
2025-05-30 01:00:38,054 - httpcore.http11 - DEBUG - receive_response_body.complete
2025-05-30 01:00:38,055 - httpcore.http11 - DEBUG - response_closed.started
2025-05-30 01:00:38,055 - httpcore.http11 - DEBUG - response_closed.complete
2025-05-30 01:00:38,055 - httpcore.connection - DEBUG - close.started
2025-05-30 01:00:38,056 - httpcore.connection - DEBUG - close.complete
2025-05-30 01:00:38,097 - httpcore.http11 - DEBUG - receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'OK', [(b'Date', b'Thu, 29 May 2025 16:00:38 GMT'), (b'Content-Type', b'text/html; charset=utf-8'), (b'Transfer-Encoding', b'chunked'), (b'Connection', b'keep-alive'), (b'Server', b'nginx/1.18.0'), (b'ContentType', b'application/json'), (b'Access-Control-Allow-Origin', b'*'), (b'Content-Encoding', b'gzip')])
2025-05-30 01:00:38,098 - httpx - INFO - HTTP Request: GET https://api.gradio.app/v3/tunnel-request "HTTP/1.1 200 OK"
2025-05-30
01:00:38,098 - httpcore.http11 - DEBUG - receive_response_body.started request= 2025-05-30 01:00:38,099 - httpcore.http11 - DEBUG - receive_response_body.complete 2025-05-30 01:00:38,099 - httpcore.http11 - DEBUG - response_closed.started 2025-05-30 01:00:38,099 - httpcore.http11 - DEBUG - response_closed.complete 2025-05-30 01:00:38,099 - httpcore.connection - DEBUG - close.started 2025-05-30 01:00:38,100 - httpcore.connection - DEBUG - close.complete 2025-05-30 01:00:38,699 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): huggingface.co:443 2025-05-30 01:00:38,831 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:38,831 - simple_memory_calculator - INFO - Using known memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:38,832 - simple_memory_calculator - DEBUG - Known data: {'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0} 2025-05-30 01:00:38,832 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 01:00:38,832 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:38,832 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:38,832 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 01:00:38,832 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:38,832 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:38,915 - urllib3.connectionpool - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0 2025-05-30 01:00:43,208 - simple_memory_calculator - INFO - Getting memory requirements for model: 
black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:43,209 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:43,209 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 8.0GB VRAM 2025-05-30 01:00:43,209 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:43,209 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:43,209 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 01:00:43,209 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:43,209 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,193 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,194 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,194 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 16.0GB VRAM 2025-05-30 01:00:53,194 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,194 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,194 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 01:00:53,194 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,194 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,245 - simple_memory_calculator - INFO 
- Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,245 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,245 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 16.0GB VRAM 2025-05-30 01:00:53,245 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,245 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,245 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 01:00:53,245 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:00:53,245 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,157 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,157 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,157 - simple_memory_calculator - INFO - Generating memory recommendations for black-forest-labs/FLUX.1-schnell with 16.0GB VRAM 2025-05-30 01:01:20,157 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,158 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,158 - simple_memory_calculator - DEBUG - Model memory: 24.0GB, Inference memory: 36.0GB 2025-05-30 01:01:20,158 - simple_memory_calculator - INFO - Getting memory requirements for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,158 - simple_memory_calculator - DEBUG - Using cached memory data for black-forest-labs/FLUX.1-schnell 2025-05-30 
01:01:20,158 - auto_diffusers - INFO - Starting code generation for model: black-forest-labs/FLUX.1-schnell 2025-05-30 01:01:20,158 - auto_diffusers - DEBUG - Parameters: prompt='A cat holding a sign that says hello world...', size=(768, 1360), steps=4 2025-05-30 01:01:20,158 - auto_diffusers - DEBUG - Manual specs: True, Memory analysis provided: True 2025-05-30 01:01:20,158 - auto_diffusers - INFO - Using manual hardware specifications 2025-05-30 01:01:20,158 - auto_diffusers - DEBUG - Manual specs: {'platform': 'Linux', 'architecture': 'manual_input', 'cpu_count': 8, 'python_version': '3.11', 'cuda_available': False, 'mps_available': False, 'torch_version': '2.0+', 'manual_input': True, 'ram_gb': 16, 'user_dtype': None, 'gpu_info': [{'name': 'RTX 5070 Ti', 'memory_mb': 16384}]} 2025-05-30 01:01:20,158 - auto_diffusers - DEBUG - GPU detected with 16.0 GB VRAM 2025-05-30 01:01:20,158 - auto_diffusers - INFO - Selected optimization profile: performance 2025-05-30 01:01:20,158 - auto_diffusers - DEBUG - Creating generation prompt for Gemini API 2025-05-30 01:01:20,158 - auto_diffusers - DEBUG - Prompt length: 7603 characters 2025-05-30 01:01:20,158 - auto_diffusers - INFO - ================================================================================ 2025-05-30 01:01:20,158 - auto_diffusers - INFO - PROMPT SENT TO GEMINI API: 2025-05-30 01:01:20,158 - auto_diffusers - INFO - ================================================================================ 2025-05-30 01:01:20,159 - auto_diffusers - INFO - You are an expert in optimizing diffusers library code for different hardware configurations. NOTE: This system includes curated optimization knowledge from HuggingFace documentation. 
TASK: Generate optimized Python code for running a diffusion model with the following specifications:
- Model: black-forest-labs/FLUX.1-schnell
- Prompt: "A cat holding a sign that says hello world"
- Image size: 768x1360
- Inference steps: 4

HARDWARE SPECIFICATIONS:
- Platform: Linux (manual_input)
- CPU Cores: 8
- CUDA Available: False
- MPS Available: False
- Optimization Profile: performance
- GPU: RTX 5070 Ti (16.0 GB VRAM)

MEMORY ANALYSIS:
- Model Memory Requirements: 36.0 GB (FP16 inference)
- Model Weights Size: 24.0 GB (FP16)
- Memory Recommendation: 🔄 Requires sequential CPU offloading
- Recommended Precision: float16
- Attention Slicing Recommended: True
- VAE Slicing Recommended: True

OPTIMIZATION KNOWLEDGE BASE:

# DIFFUSERS OPTIMIZATION TECHNIQUES

## Memory Optimization Techniques

### 1. Model CPU Offloading
Use `enable_model_cpu_offload()` to move models between GPU and CPU automatically:
```python
pipe.enable_model_cpu_offload()
```
- Saves significant VRAM by keeping only active models on GPU
- Automatic management, no manual intervention needed
- Compatible with all pipelines

### 2. Sequential CPU Offloading
Use `enable_sequential_cpu_offload()` for more aggressive memory saving:
```python
pipe.enable_sequential_cpu_offload()
```
- More memory efficient than model offloading
- Moves models to CPU after each forward pass
- Best for very limited VRAM scenarios

### 3. Attention Slicing
Use `enable_attention_slicing()` to reduce memory during attention computation:
```python
pipe.enable_attention_slicing()
# or specify slice size
pipe.enable_attention_slicing("max")  # maximum slicing
pipe.enable_attention_slicing(1)  # slice_size = 1
```
- Trades compute time for memory
- Most effective for high-resolution images
- Can be combined with other techniques

### 4. VAE Slicing
Use `enable_vae_slicing()` for large batch processing:
```python
pipe.enable_vae_slicing()
```
- Decodes images one at a time instead of all at once
- Essential for batch sizes > 4
- Minimal performance impact on single images

### 5. VAE Tiling
Use `enable_vae_tiling()` for high-resolution image generation:
```python
pipe.enable_vae_tiling()
```
- Enables 4K+ image generation on 8GB VRAM
- Splits images into overlapping tiles
- Automatically disabled for 512x512 or smaller images

### 6. Memory Efficient Attention (xFormers)
Use `enable_xformers_memory_efficient_attention()` if xFormers is installed:
```python
pipe.enable_xformers_memory_efficient_attention()
```
- Significantly reduces memory usage and improves speed
- Requires xformers library installation
- Compatible with most models

## Performance Optimization Techniques

### 1. Half Precision (FP16/BF16)
Use lower precision for better memory and speed:
```python
# FP16 (widely supported)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
# BF16 (better numerical stability, newer hardware)
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
- FP16: Halves memory usage, widely supported
- BF16: Better numerical stability, requires newer GPUs
- Essential for most optimization scenarios

### 2. Torch Compile (PyTorch 2.0+)
Use `torch.compile()` for significant speed improvements:
```python
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
# For some models, compile VAE too:
pipe.vae.decode = torch.compile(pipe.vae.decode, mode="reduce-overhead", fullgraph=True)
```
- 5-50% speed improvement
- Requires PyTorch 2.0+
- First run is slower due to compilation

### 3. Fast Schedulers
Use faster schedulers for fewer steps:
```python
from diffusers import LMSDiscreteScheduler, UniPCMultistepScheduler
# LMS Scheduler (good quality, fast)
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
# UniPC Scheduler (fastest)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
```

## Hardware-Specific Optimizations

### NVIDIA GPU Optimizations
```python
# Enable Tensor Cores
torch.backends.cudnn.benchmark = True
# Optimal data type for NVIDIA
torch_dtype = torch.float16  # or torch.bfloat16 for RTX 30/40 series
```

### Apple Silicon (MPS) Optimizations
```python
# Use MPS device
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = pipe.to(device)
# Recommended dtype for Apple Silicon
torch_dtype = torch.bfloat16  # Better than float16 on Apple Silicon
# Attention slicing often helps on MPS
pipe.enable_attention_slicing()
```

### CPU Optimizations
```python
# Use float32 for CPU
torch_dtype = torch.float32
# Enable optimized attention
pipe.enable_attention_slicing()
```

## Model-Specific Guidelines

### FLUX Models
- Do NOT use guidance_scale parameter (not needed for FLUX)
- Use 4-8 inference steps maximum
- BF16 dtype recommended
- Enable attention slicing for memory optimization

### Stable Diffusion XL
- Enable attention slicing for high resolutions
- Use refiner model sparingly to save memory
- Consider VAE tiling for >1024px images

### Stable Diffusion 1.5/2.1
- Very memory efficient base models
- Can often run without optimizations on 8GB+ VRAM
- Enable VAE slicing for batch processing

## Memory Usage Estimation
- FLUX.1: ~24GB for full precision, ~12GB for FP16
- SDXL: ~7GB for FP16, ~14GB for FP32
- SD 1.5: ~2GB for FP16, ~4GB for FP32

## Optimization Combinations by VRAM

### 24GB+ VRAM (High-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

### 12-24GB VRAM (Mid-range)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_xformers_memory_efficient_attention()
```

### 8-12GB VRAM (Entry-level)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()
pipe.enable_xformers_memory_efficient_attention()
```

### <8GB VRAM (Low-end)
```python
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing("max")
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```

IMPORTANT: For FLUX.1-schnell models, do NOT include guidance_scale parameter as it's not needed.

Using the OPTIMIZATION KNOWLEDGE BASE above, generate Python code that:
1. **Selects the best optimization techniques** for the specific hardware profile
2. **Applies appropriate memory optimizations** based on available VRAM
3. **Uses optimal data types** for the target hardware:
   - User specified dtype (if provided): Use exactly as specified
   - Apple Silicon (MPS): prefer torch.bfloat16
   - NVIDIA GPUs: prefer torch.float16 or torch.bfloat16
   - CPU only: use torch.float32
4. **Implements hardware-specific optimizations** (CUDA, MPS, CPU)
5. **Follows model-specific guidelines** (e.g., FLUX guidance_scale handling)

IMPORTANT GUIDELINES:
- Reference the OPTIMIZATION KNOWLEDGE BASE to select appropriate techniques
- Include all necessary imports
- Add brief comments explaining optimization choices
- Generate compact, production-ready code
- Inline values where possible for concise code
- Generate ONLY the Python code, no explanations before or after the code block

2025-05-30 01:01:20,159 - auto_diffusers - INFO - ================================================================================
2025-05-30 01:01:20,159 - auto_diffusers - INFO - Sending request to Gemini API
2025-05-30 01:01:43,263 - auto_diffusers - INFO - Successfully received response from Gemini API (no tools used)
2025-05-30 01:01:43,263 - auto_diffusers - DEBUG - Response length: 2591 characters
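The memory figures logged by simple_memory_calculator (known data `{'params_billions': 12.0, 'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}`) are consistent with a simple heuristic: 2 bytes per parameter for FP16 weights, plus roughly 50% overhead for activations during inference. The calculator's actual implementation is not shown in this log, so the sketch below is only an illustration of that arithmetic; `estimate_fp16_memory` is a hypothetical name, not the logged class's API.

```python
def estimate_fp16_memory(params_billions: float, overhead: float = 1.5) -> dict:
    """Reproduce the logged FP16 memory figures from a parameter count.

    Assumed heuristic (not taken from SimpleMemoryCalculator's source):
    - weights: 2 bytes per parameter, i.e. 2 GB per billion parameters
    - inference: weights plus ~50% activation overhead
    """
    weights_gb = params_billions * 2.0
    inference_gb = weights_gb * overhead
    return {"fp16_gb": weights_gb, "inference_fp16_gb": inference_gb}

# FLUX.1-schnell has ~12B parameters, matching the logged known data
print(estimate_fp16_memory(12.0))  # {'fp16_gb': 24.0, 'inference_fp16_gb': 36.0}
```

Under this heuristic, the logged "Model memory: 24.0GB, Inference memory: 36.0GB" lines both follow directly from the single `params_billions` entry.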