[cloud-user@ip-172-31-22-68 ~]$ ./test.sh
Username: mdepaulo+stage
Password:
Login Succeeded!
ilab, version 0.23.2

----------------------------------------------------
         Welcome to the InstructLab CLI
This guide will help you to setup your environment
----------------------------------------------------

Please provide the following values to initiate the environment [press 'Enter' for default options when prompted]
Cloning https://github.com/instructlab/taxonomy.git...
Generating config file: /var/home/cloud-user/.config/instructlab/config.yaml
INFO 2025-04-08 18:29:07,638 instructlab.config.init:259: Detecting hardware...
Please choose a system profile.
Profiles set hardware-specific defaults for all commands and sections of the configuration.
First, please select the hardware vendor your system falls into
[0] NO SYSTEM PROFILE
[1] NVIDIA
Enter the number of your choice [0]: 1
You selected: NVIDIA
Next, please select the specific hardware configuration that most closely matches your system.
[0] NO SYSTEM PROFILE
[1] NVIDIA L4 X8
[2] NVIDIA L40S X4
[3] NVIDIA L40S X8
[4] NVIDIA H100 X4
[5] NVIDIA H100 X2
[6] NVIDIA H100 X8
[7] NVIDIA A100 X4
[8] NVIDIA A100 X2
[9] NVIDIA A100 X8
Enter the number of your choice [hit enter for hardware defaults] [0]: 1
You selected: /var/home/cloud-user/.local/share/instructlab/internal/system_profiles/nvidia/l4/l4_x8.yaml

--------------------------------------------
   Initialization completed successfully!
 You're ready to start using `ilab`. Enjoy!
--------------------------------------------

INFO 2025-04-08 18:32:23,040 instructlab.model.download:193: Downloading model from OCI registry:
    Model: docker://registry.stage.redhat.io/rhelai1/skills-adapter-v3@1.4
    Destination: /var/home/cloud-user/.cache/instructlab/models
Copying blob 6f4761a5ce47 done   |
Copying blob cfc7749b96f6 done   |
Copying blob 01f47425d010 done   |
Copying blob 7898f431429a done   |
Copying blob 4452b845ab9c done   |
Copying blob 488e082ff0d1 done   |
Copying blob d8d4489231c6 done   |
Copying blob 5d44fdf2d36d done   |
Copying config 44136fa355 done   |
Writing manifest to image destination
INFO 2025-04-08 18:32:35,114 instructlab.model.download:289: ᕦ(òᴗóˇ)ᕤ docker://registry.stage.redhat.io/rhelai1/skills-adapter-v3 model download completed successfully! ᕦ(òᴗóˇ)ᕤ
INFO 2025-04-08 18:32:35,114 instructlab.model.download:303: Available models (`ilab model list`):
+------------+---------------+------+
| Model Name | Last Modified | Size |
+------------+---------------+------+
+------------+---------------+------+
INFO 2025-04-08 18:32:42,233 instructlab.model.download:193: Downloading model from OCI registry:
    Model: docker://registry.stage.redhat.io/rhelai1/knowledge-adapter-v3@1.4
    Destination: /var/home/cloud-user/.cache/instructlab/models
Copying blob c4334cbcdf17 done   |
Copying blob 82d96d7a9e6c done   |
Copying blob e84e60569620 done   |
Copying blob 488e082ff0d1 done   |
Copying blob 490c96c184aa done   |
Copying blob cfc7749b96f6 done   |
Copying blob 0f17dc4a3b97 done   |
Copying blob d2313c03a149 done   |
Copying config 44136fa355 done   |
Writing manifest to image destination
INFO 2025-04-08 18:32:49,256 instructlab.model.download:289: ᕦ(òᴗóˇ)ᕤ docker://registry.stage.redhat.io/rhelai1/knowledge-adapter-v3 model download completed successfully! ᕦ(òᴗóˇ)ᕤ
INFO 2025-04-08 18:32:49,256 instructlab.model.download:303: Available models (`ilab model list`):
+------------+---------------+------+
| Model Name | Last Modified | Size |
+------------+---------------+------+
+------------+---------------+------+
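The empty "Available models" tables after the two adapter downloads look expected, since the skills and knowledge adapters are not standalone models. Before going further it can be worth checking what the NVIDIA L4 X8 profile actually wrote into the generated config, in particular the vLLM tensor-parallel default that matters later. A minimal check, assuming this build also supports `ilab config show`:

    cat /var/home/cloud-user/.config/instructlab/config.yaml
    ilab config show   # prints the effective configuration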
INFO 2025-04-08 18:32:56,368 instructlab.model.download:193: Downloading model from OCI registry:
    Model: docker://registry.stage.redhat.io/rhelai1/granite-3.1-8b-lab-v1@1.4
    Destination: /var/home/cloud-user/.cache/instructlab/models
Copying blob ee911225bc65 done   |
Copying blob 3411b4c69b70 done   |
Copying blob 72438510a985 done   |
Copying blob 9e5c20e42c39 done   |
Copying blob dd233853f746 done   |
Copying blob f609657ba3e3 done   |
Copying blob 935d3259d28f done   |
Copying blob 5c78b58d992e done   |
Copying blob 625f5206d172 done   |
Copying blob adde9662090e done   |
Copying config 44136fa355 done   |
Writing manifest to image destination
INFO 2025-04-08 18:36:00,611 instructlab.model.download:289: ᕦ(òᴗóˇ)ᕤ docker://registry.stage.redhat.io/rhelai1/granite-3.1-8b-lab-v1 model download completed successfully! ᕦ(òᴗóˇ)ᕤ
INFO 2025-04-08 18:36:00,612 instructlab.model.download:303: Available models (`ilab model list`):
+------------------------------+---------------------+---------+
| Model Name                   | Last Modified       | Size    |
+------------------------------+---------------------+---------+
| models/granite-3.1-8b-lab-v1 | 2025-04-08 18:36:00 | 15.2 GB |
+------------------------------+---------------------+---------+
INFO 2025-04-08 18:36:07,784 instructlab.model.download:193: Downloading model from OCI registry:
    Model: docker://registry.stage.redhat.io/rhelai1/granite-3.1-8b-starter-v1@1.4
    Destination: /var/home/cloud-user/.cache/instructlab/models
Copying blob 22b1424e35df done   |
Copying blob acc250559fc1 done   |
Copying blob b19c07c7ada5 done   |
Copying blob a05a85bd5165 done   |
Copying blob dd233853f746 done   |
Copying blob f609657ba3e3 done   |
Copying blob 935d3259d28f done   |
Copying blob 5c78b58d992e done   |
Copying blob 625f5206d172 done   |
Copying blob adde9662090e done   |
Copying config 44136fa355 done   |
Writing manifest to image destination
INFO 2025-04-08 18:39:07,727 instructlab.model.download:289: ᕦ(òᴗóˇ)ᕤ docker://registry.stage.redhat.io/rhelai1/granite-3.1-8b-starter-v1 model download completed successfully! ᕦ(òᴗóˇ)ᕤ
INFO 2025-04-08 18:39:07,727 instructlab.model.download:303: Available models (`ilab model list`):
+----------------------------------+---------------------+---------+
| Model Name                       | Last Modified       | Size    |
+----------------------------------+---------------------+---------+
| models/granite-3.1-8b-lab-v1     | 2025-04-08 18:36:00 | 15.2 GB |
| models/granite-3.1-8b-starter-v1 | 2025-04-08 18:39:07 | 15.2 GB |
+----------------------------------+---------------------+---------+
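For reference, each download above presumably maps to an `ilab model download` call inside test.sh along these lines; the repository and release values are taken from the log, but the exact flags the script passes are an assumption:

    ilab model download \
        --repository docker://registry.stage.redhat.io/rhelai1/granite-3.1-8b-lab-v1 \
        --release 1.4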
INFO 2025-04-08 18:39:15,000 instructlab.model.download:193: Downloading model from OCI registry:
    Model: docker://registry.stage.redhat.io/rhelai1/mixtral-8x7b-instruct-v0-1@1.4
    Destination: /var/home/cloud-user/.cache/instructlab/models
Copying blob d0b63fca793c done   |
Copying blob 47324f06fdb5 done   |
Copying blob 40e6ecbcedfc done   |
Copying blob 54669c5aec29 done   |
Copying blob 29e15364d8ab done   |
Copying blob 9d56d04b36d0 done   |
Copying blob 67e0596920fe done   |
Copying blob e330eabd70b4 done   |
Copying blob 048fa5347877 done   |
Copying blob 83bfed6169c1 done   |
Copying blob af316ad78402 done   |
Copying blob 5882e4366c63 done   |
Copying blob 77813d1dbee6 done   |
Copying blob ff24540d9967 done   |
Copying blob 48bc12845676 done   |
Copying blob e56a2e7eda69 done   |
Copying blob da627f6a3c8f done   |
Copying blob 61e0f22bff93 done   |
Copying blob 76466bfc2312 done   |
Copying blob 570af3b802be done   |
Copying blob 4c603b65cbd5 done   |
Copying blob 272f33c76bca done   |
Copying blob a8f30ebfaf56 done   |
Copying blob 6fa06efa2785 done   |
Copying blob 11c08db21487 done   |
Copying blob dadfd56d7667 done   |
Copying blob 475361439e5c done   |
Copying config 44136fa355 done   |
Writing manifest to image destination
INFO 2025-04-08 18:50:32,865 instructlab.model.download:289: ᕦ(òᴗóˇ)ᕤ docker://registry.stage.redhat.io/rhelai1/mixtral-8x7b-instruct-v0-1 model download completed successfully! ᕦ(òᴗóˇ)ᕤ
INFO 2025-04-08 18:50:32,866 instructlab.model.download:303: Available models (`ilab model list`):
+-----------------------------------+---------------------+---------+
| Model Name                        | Last Modified       | Size    |
+-----------------------------------+---------------------+---------+
| models/granite-3.1-8b-lab-v1      | 2025-04-08 18:36:00 | 15.2 GB |
| models/granite-3.1-8b-starter-v1  | 2025-04-08 18:39:07 | 15.2 GB |
| models/mixtral-8x7b-instruct-v0-1 | 2025-04-08 18:50:32 | 87.0 GB |
+-----------------------------------+---------------------+---------+
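mixtral-8x7b-instruct-v0-1 above is 87.0 GB and prometheus-8x7b-v2-0 below is the same size, so the full model set needs roughly 200 GB under the cache directory shown in the log. A quick sanity check before kicking off the large downloads:

    df -h /var/home/cloud-user/.cache
    du -sh /var/home/cloud-user/.cache/instructlab/models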
INFO 2025-04-08 18:50:45,925 instructlab.model.download:193: Downloading model from OCI registry:
    Model: docker://registry.stage.redhat.io/rhelai1/prometheus-8x7b-v2-0@1.4
    Destination: /var/home/cloud-user/.cache/instructlab/models
Copying blob a375e93d6f89 done   |
Copying blob 17e420ee7a3c done   |
Copying blob cc0b434114a0 done   |
Copying blob 40e6ecbcedfc done   |
Copying blob 9d56d04b36d0 done   |
Copying blob 45147a3fae61 done   |
Copying blob 07529e846183 done   |
Copying blob 69239081714b done   |
Copying blob 82ba1df1bcff done   |
Copying blob 7dfbb89db40a done   |
Copying blob d6b91c38dcac done   |
Copying blob 042fa6758c75 done   |
Copying blob fc2658c9dba2 done   |
Copying blob 958bf1eb6fc6 done   |
Copying blob 4cfc38eabca1 done   |
Copying blob d89723805505 done   |
Copying blob ad148e16985f done   |
Copying blob 520bd83ae1b8 done   |
Copying blob 189922a4c16e done   |
Copying blob 96b05ad26199 done   |
Copying blob e6086166348b done   |
Copying blob af6f32190c41 done   |
Copying blob 92470b0bd930 done   |
Copying blob a8f30ebfaf56 done   |
Copying blob 96bdbb8504d9 done   |
Copying blob fc4f0bd70b37 done   |
Copying blob dadfd56d7667 done   |
Copying blob 7ada2fa1461c done   |
Copying config 44136fa355 done   |
Writing manifest to image destination
INFO 2025-04-08 19:01:34,568 instructlab.model.download:289: ᕦ(òᴗóˇ)ᕤ docker://registry.stage.redhat.io/rhelai1/prometheus-8x7b-v2-0 model download completed successfully! ᕦ(òᴗóˇ)ᕤ
INFO 2025-04-08 19:01:34,568 instructlab.model.download:303: Available models (`ilab model list`):
+-----------------------------------+---------------------+---------+
| Model Name                        | Last Modified       | Size    |
+-----------------------------------+---------------------+---------+
| models/granite-3.1-8b-lab-v1      | 2025-04-08 18:36:00 | 15.2 GB |
| models/granite-3.1-8b-starter-v1  | 2025-04-08 18:39:07 | 15.2 GB |
| models/mixtral-8x7b-instruct-v0-1 | 2025-04-08 18:50:32 | 87.0 GB |
| models/prometheus-8x7b-v2-0       | 2025-04-08 19:01:34 | 87.0 GB |
+-----------------------------------+---------------------+---------+
INFO 2025-04-08 19:01:52,114 instructlab.model.serve_backend:54: Setting backend_type in the serve config to vllm
INFO 2025-04-08 19:01:52,131 instructlab.model.serve_backend:60: Using model '/var/home/cloud-user/.cache/instructlab/models/granite-3.1-8b-lab-v1' with -1 gpu-layers and 4096 max context size.
INFO 2025-04-08 19:01:52,131 instructlab.model.serve_backend:92: '--gpus' flag used alongside '--tensor-parallel-size' in the vllm_args section of the config file. Using value of the --gpus flag.
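The serve step that starts here inherits a tensor-parallel size of 8 from the L4 X8 profile, which assumes eight visible GPUs. A quick way to confirm what the instance actually exposes (the assertion further down reports a count of 1):

    nvidia-smi -L          # one line per visible GPU
    nvidia-smi -L | wc -l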
INFO 2025-04-08 19:01:52,246 instructlab.model.backends.vllm:332: vLLM starting up on pid 34 at http://127.0.0.1:8000/v1
INFO 04-08 19:03:00 api_server.py:585] vLLM API server version 0.6.4.post1
INFO 04-08 19:03:00 api_server.py:586] args: Namespace(host='127.0.0.1', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template='/tmp/tmpao_8vt_r', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/var/home/cloud-user/.cache/instructlab/models/granite-3.1-8b-lab-v1', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', chat_template_text_format='string', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend='mp', worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=8, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
INFO 04-08 19:03:00 api_server.py:175] Multiprocessing frontend to use ipc:///tmp/33b8604d-202f-4fc3-bc37-83817ddfce8a for IPC Path.
INFO 04-08 19:03:00 api_server.py:194] Started engine process with PID 54
INFO 04-08 19:03:00 config.py:1861] Downcasting torch.float32 to torch.float16.
INFO 04-08 19:03:06 config.py:1861] Downcasting torch.float32 to torch.float16.
WARNING 04-08 19:03:08 arg_utils.py:1013] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
WARNING 04-08 19:03:08 arg_utils.py:1075] [DEPRECATED] Block manager v1 has been removed, and setting --use-v2-block-manager to True or False has no effect on vLLM behavior. Please remove --use-v2-block-manager in your engine argument. If your use case is not supported by SelfAttnBlockSpaceManager (i.e. block manager v2), please file an issue with detailed information.
INFO 04-08 19:03:08 config.py:1136] Chunked prefill is enabled with max_num_batched_tokens=512.
WARNING 04-08 19:03:08 config.py:791] Possibly too large swap space. 32.00 GiB out of the 60.46 GiB total CPU memory is allocated for the swap space.
WARNING 04-08 19:03:13 arg_utils.py:1013] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
WARNING 04-08 19:03:13 arg_utils.py:1075] [DEPRECATED] Block manager v1 has been removed, and setting --use-v2-block-manager to True or False has no effect on vLLM behavior. Please remove --use-v2-block-manager in your engine argument. If your use case is not supported by SelfAttnBlockSpaceManager (i.e. block manager v2), please file an issue with detailed information.
INFO 04-08 19:03:13 config.py:1136] Chunked prefill is enabled with max_num_batched_tokens=512.
WARNING 04-08 19:03:13 config.py:791] Possibly too large swap space. 32.00 GiB out of the 60.46 GiB total CPU memory is allocated for the swap space.
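The chunked-prefill warnings above carry their own mitigation. If it were needed, one way to apply it on this setup would be through the vllm_args list that the serve log mentioned earlier (the exact config layout is an assumption), then restarting the server:

    # In /var/home/cloud-user/.config/instructlab/config.yaml, under the serve
    # section's vLLM settings (layout assumed), add:
    #   vllm_args: ["--enable-chunked-prefill=False"]
    ilab model serve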
INFO 04-08 19:03:13 llm_engine.py:249] Initializing an LLM engine (v0.6.4.post1) with config: model='/var/home/cloud-user/.cache/instructlab/models/granite-3.1-8b-lab-v1', speculative_config=None, tokenizer='/var/home/cloud-user/.cache/instructlab/models/granite-3.1-8b-lab-v1', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=8, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=/var/home/cloud-user/.cache/instructlab/models/granite-3.1-8b-lab-v1, num_scheduler_steps=1, chunked_prefill_enabled=True multi_step_stream_outputs=True, enable_prefix_caching=False, use_async_output_proc=True, use_cached_outputs=True, chat_template_text_format=string, mm_processor_kwargs=None, pooler_config=None)
ERROR 04-08 19:03:14 engine.py:366] please set tensor_parallel_size (8) to less than max local gpu count (1)
ERROR 04-08 19:03:14 engine.py:366] Traceback (most recent call last):
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 04-08 19:03:14 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 04-08 19:03:14 engine.py:366]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 04-08 19:03:14 engine.py:366]     return cls(ipc_path=ipc_path,
ERROR 04-08 19:03:14 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 04-08 19:03:14 engine.py:366]     self.engine = LLMEngine(*args, **kwargs)
ERROR 04-08 19:03:14 engine.py:366]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/llm_engine.py", line 347, in __init__
ERROR 04-08 19:03:14 engine.py:366]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 04-08 19:03:14 engine.py:366]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
ERROR 04-08 19:03:14 engine.py:366]     super().__init__(*args, **kwargs)
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 04-08 19:03:14 engine.py:366]     self._init_executor()
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 34, in _init_executor
ERROR 04-08 19:03:14 engine.py:366]     self._check_executor_parameters()
ERROR 04-08 19:03:14 engine.py:366]   File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 137, in _check_executor_parameters
ERROR 04-08 19:03:14 engine.py:366]     assert tensor_parallel_size <= cuda_device_count, (
ERROR 04-08 19:03:14 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-08 19:03:14 engine.py:366] AssertionError: please set tensor_parallel_size (8) to less than max local gpu count (1)
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib64/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib64/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
    return cls(ipc_path=ipc_path,
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
    self.engine = LLMEngine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/engine/llm_engine.py", line 347, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/executor_base.py", line 36, in __init__
    self._init_executor()
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 34, in _init_executor
    self._check_executor_parameters()
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 137, in _check_executor_parameters
    assert tensor_parallel_size <= cuda_device_count, (
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: please set tensor_parallel_size (8) to less than max local gpu count (1)
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 643, in <module>
    uvloop.run(run_server(args))
  File "/opt/app-root/lib64/python3.11/site-packages/uvloop/__init__.py", line 105, in run
    return runner.run(wrapper())
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/opt/app-root/lib64/python3.11/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 609, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib64/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 113, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib64/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/app-root/lib64/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 210, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.