A mismatch between the host machine's driver and the SDK inside the container produces the following error:
root@saas-de-junzhi-junzhi-mr100-8-f326-68695fb794-v8mpd:~# python3
Python 3.10.12 (main, Aug 16 2024, 18:39:09) [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 237, in <module>
    _load_global_deps()
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 196, in _load_global_deps
    raise err
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 177, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/local/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/local/corex/lib64/libcuda.so.1: undefined symbol: _Z17Thunk_GetLinkInfoiiPv
>>> import vllm
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/site-packages/vllm/__init__.py", line 10, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/usr/local/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 7, in <module>
    from vllm.config import (CacheConfig, DecodingConfig, DeviceConfig,
  File "/usr/local/lib/python3.10/site-packages/vllm/config.py", line 6, in <module>
    import torch
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 237, in <module>
    _load_global_deps()
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 196, in _load_global_deps
    raise err
  File "/usr/local/lib/python3.10/site-packages/torch/__init__.py", line 177, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/local/lib/python3.10/ctypes/__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /usr/local/corex/lib64/libcuda.so.1: undefined symbol: _Z17Thunk_GetLinkInfoiiPv
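Both tracebacks end in the same place: torch's `_load_global_deps` simply dlopens the CoreX `libcuda.so.1` shim via `ctypes.CDLL`, and the load fails because the container's library references a symbol the host driver does not export. A minimal sketch for checking this in isolation, without importing torch (the path is taken from the traceback above; `check_driver` is a hypothetical helper, and on a machine without the CoreX stack the load fails too, just with a "No such file" message instead of "undefined symbol"):

```python
import ctypes

# Userspace driver shim that torch's _load_global_deps tries to load.
COREX_LIBCUDA = "/usr/local/corex/lib64/libcuda.so.1"

def check_driver(lib_path: str) -> str:
    """dlopen the driver library the same way torch does and report the result.

    Returns "ok" if the library loads, otherwise the dlopen error string
    (e.g. the "undefined symbol: _Z17Thunk_GetLinkInfoiiPv" seen above,
    which indicates a host driver / container SDK version mismatch).
    """
    try:
        ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
        return "ok"
    except OSError as err:
        return str(err)

if __name__ == "__main__":
    print(check_driver(COREX_LIBCUDA))
```

If this prints "ok", the mismatch is elsewhere; an "undefined symbol" result means the container's SDK expects a newer (or older) host driver, and the fix is to align the two versions.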