Posted on 2024-3-20 23:24:45
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\blocks.py", line 1199, in call_function
    prediction = await utils.async_iteration(iterator)
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\utils.py", line 519, in async_iteration
    return await iterator.__anext__()
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\utils.py", line 512, in __anext__
    return await anyio.to_thread.run_sync(
  File "D:\facechain2.0.0\python310\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\facechain2.0.0\python310\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "D:\facechain2.0.0\python310\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\utils.py", line 495, in run_sync_iterator_async
    return next(iterator)
  File "D:\facechain2.0.0\python310\lib\site-packages\gradio\utils.py", line 649, in gen_wrapper
    yield from f(*args, **kwargs)
  File "D:\facechain2.0.0\app.py", line 351, in launch_pipeline
    outputs = future.result()
  File "D:\facechain2.0.0\python310\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "D:\facechain2.0.0\python310\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
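This "no kernel image is available for execution on the device" error typically means the installed PyTorch build was not compiled for the GPU's compute capability (for example, an older wheel on a newer card). On a real install you can compare `torch.cuda.get_device_capability()` against `torch.cuda.get_arch_list()`. The sketch below illustrates that check with a plain helper, so it runs without a GPU; the function name and the sample arch list are illustrative assumptions, not FaceChain or PyTorch APIs:

```python
# Hedged sketch: does the PyTorch wheel's compiled arch list cover this GPU?
# In practice, feed it torch.cuda.get_device_capability(0) and
# torch.cuda.get_arch_list(); here we use hard-coded illustrative values.

def arch_supported(device_capability, arch_list):
    """Return True if the device's SM version appears in the compiled arch list.

    device_capability: tuple such as (8, 6) for an sm_86 (RTX 30-series) GPU.
    arch_list: strings such as ["sm_70", "sm_75"], as torch.cuda.get_arch_list()
               returns on a real install.
    Note: this ignores PTX forward compatibility, so it is a simplification.
    """
    sm = f"sm_{device_capability[0]}{device_capability[1]}"
    return sm in arch_list

# An older wheel built only up to sm_75 has no kernel image for an sm_86 GPU,
# which is exactly the failure mode in the traceback above.
old_wheel_archs = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]
print(arch_supported((7, 5), old_wheel_archs))  # True: sm_75 is compiled in
print(arch_supported((8, 6), old_wheel_archs))  # False: sm_86 is missing
```

If the check fails, reinstalling a PyTorch build that matches the GPU (and the installed CUDA runtime) is the usual fix; setting `CUDA_LAUNCH_BLOCKING=1`, as the message suggests, only makes the reported stack trace synchronous and more accurate.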